Is it something to do with Base 8 Numbers (8/64/1024) not Base 10 (10/100/1000)?
Also why is it 1024MB to 1GB, but 1000GB to 1TB?
Phage0070: > Is it something to do with Base 8 Numbers (8/64/1024) not Base 10 (10/100/1000)?
Almost there. Computers don’t use base 8; they use binary, ones and zeroes. The sequence goes (2, 4, 8, 16, 32, 64, 128, 256, 512, 1024) as you add binary digit places.
inckorrect: To expand on what has already been said: imagine a small box where you can store a 0 or a 1. That’s a bit. Now imagine that you have 2 of them side by side. To control which one you’re going to read/write, you put a switch in front of them. If the switch is in one position, the bit goes into one of the boxes, and if it’s in the other position it goes into the other box. Now imagine that you want 3 boxes instead of two. Only one switch isn’t going to cut it, because a switch can only choose between 2 positions, so you need one more. But with 2 switches you can choose between 4 positions, not just 3. Putting in only 3 boxes is then kind of wasteful, when with the same circuit you could have 4 of them.
Now apply the same logic with very big numbers and you’ll understand why everything is base 2 with semiconductors.
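A minimal Python sketch of the switches-and-boxes idea above (the names are mine, purely for illustration): each extra switch doubles how many boxes you can reach, which is why counts that aren’t powers of two leave switch positions unused.

```python
# Each switch (address bit) doubles the number of boxes (cells) you can select,
# so n switches reach exactly 2**n boxes.
for switches in range(1, 5):
    reachable = 2 ** switches
    print(f"{switches} switch(es) -> {reachable} boxes")

# 1 switch(es) -> 2 boxes
# 2 switch(es) -> 4 boxes
# 3 switch(es) -> 8 boxes
# 4 switch(es) -> 16 boxes
```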
Psyk60: 1024 is a nice round number in binary. It’s 10000000000. So for many purposes it’s easier to take 1024 bytes as a unit than 1000.
There are different definitions of units like KB, MB, GB and so on. Hard disk manufacturers in particular take 1GB to be exactly 1000MB instead of 1024.
Supposedly 1024MB is actually one Gi*bi*byte, but barely anyone calls it that. Most people would call it a gigabyte.
the6thReplicant: A bit is the smallest possible piece of information I can query. It can be in two states: 0 or 1; on or off; up or down; etc.
So one bit gives me two possible states. If I have two bits then I have 4 possible states: 00, 01, 10 and 11.
So if I have n bits I can have 2^n possible states.
The other comments go from here.
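If you want to see those states listed out, here is a small Python sketch (purely illustrative; the variable names are mine) that enumerates every combination of n bits:

```python
from itertools import product

# Every combination of n bits is one distinct state, so there are 2**n of them.
n = 3
states = ["".join(bits) for bits in product("01", repeat=n)]
print(states)             # ['000', '001', '010', '011', '100', '101', '110', '111']
print(len(states), 2**n)  # 8 8
```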
DrDimebar: There are 10 types of people in the world: Those who understand binary, and those who don’t.
my_name_is_cooler: Binary. A computer switch is either on or off: electricity going through or not going through. Binary gives you powers of 2: 2/4/8/16/32/64/128/256/512/1024/2048/4096. That’s why those numbers sound familiar even if you suck with computers; they’re the sizes every memory card, phone, and computer’s storage comes in.
Workacct1484: Computers only understand two things.
* “On” and “Off”
* 1 and 0
So using this they don’t count 0,1,2,3,4,5,6,7,8,9.
They count 0,1
So how do they express 2? The same way we express 10. Add another digit.
So a computer counts like this:
* 0 – 0
* 1 – 1
* 10 – 2
* 11 – 3
* 100 – 4
* 101 – 5
* 110 – 6
* 111 – 7
Now as a side effect of this, 1000 is not a nice round number to them like it is to us. Using 1000 would be wasteful.
* 1,000 = 1111101000
* 1023 = 1111111111 – See how **every** bit is used?
Now wait a second Sergei, I thought you said 1024? Why only 1023?
Well, computers don’t start at 1. They start at 0. So there are 1024 different values: 1 through 1023, plus the 0 value, for a total of 1024.
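If you want to reproduce the counting table and the 1000-vs-1023 comparison yourself, here is a short Python sketch (illustrative only):

```python
# Counting in binary: the same idea as decimal counting, just with two digits.
for n in range(8):
    print(n, format(n, "b"))

# 1000 wastes some of the 10 available bits; 1023 uses every one of them,
# and with the value 0 included that makes 1024 distinct values.
print(format(1000, "010b"))  # 1111101000
print(format(1023, "010b"))  # 1111111111
print(2 ** 10)               # 1024
```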
c_delta: When counting things that by their nature only occur in powers of 2, 1024 is close enough to 1000 to just refer to it as “1k”. 1k is exactly 1000, and has been since the prefix “kilo” was invented, but everyone dealing with that stuff would know that exactly 1000 was unlikely and that it would probably be the closest power of 2 instead. That works well up to around 32k. 32768 is closer to 33000 than to 32000, and 65536 is already larger than 65000, but it is still referred to as 64k because it would just seem weird to jump to 33k or 65k. So a convention was born where 1024 was no longer “about 1k” but “1k” when dealing with things that are usually powers of 2. While no formal standard recognized this, it became common in computer parlance to define kilo-, mega- and gigabytes that way.
Meanwhile, other professions are getting concerned about bytes. Disk and tape drives store data sequentially, so there is absolutely no reason to use a power of 2. Communication links transmit a given number of bits in a certain time frame. Time can be arbitrarily divided, again no need for a power of 2. The standard for literally everything is that k means 1000, so that is what they go with. Besides, if you want to sell how much of something you can offer, you get the more flattering numbers from that convention, too.
Now, OS vendors on the other hand think from a file system perspective. If we store the number of bytes in 32 bits, how much space can there be in total? 2^32, or 4 binary Gbytes. That is almost 4.3 decimal ones, but who cares about decimal?
Turns out customers got confused because their 2 GB drive only shows up as 1.86 GB. People blame greedy drive manufacturers.
The confusion finally reaches a standardising body. The IEC does not want to muddy the definition of kilos, megas and gigas, so they introduce a new set of prefixes for the binary equivalents: kibi, mebi and gibi finally formalize the discrepancy between powers of 2 and powers of 10. There was much rejoicing by techies, and you can often find GiBs on systems used primarily by the technologically literate, like GNU/Linux. What about more mainstream systems?
“We are not going to introduce that extra ‘i’. We do not want to confuse the customers.” – “But is the discrepancy not more confusing than that?” – “Technically yes, but they will just blame drive makers.”
The end.
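The 1.86 GB figure in the story above is easy to reproduce. A small Python sketch, under the usual assumption that the drive maker means 10^9 bytes while the OS divides by 2^30 but still prints “GB”:

```python
GB  = 10 ** 9   # decimal gigabyte (what the drive box says)
GiB = 2 ** 30   # binary gibibyte (what many OSes divide by, but label "GB")

drive_bytes = 2 * GB
print(drive_bytes / GiB)            # ~1.86 -> shown as "1.86 GB"

# Same effect for a 2^32-byte file system limit: exactly 4 GiB, almost 4.3 GB.
print(2 ** 32 / GiB, 2 ** 32 / GB)  # 4.0  4.294967296
```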
psykojello: Not an answer, but if you play the mobile game 2048, it will start making a lot of sense to you 🙂
Farnsworthson: A) It’s to do with binary. If you have 10 binary switches (bits), you can represent 1024 different numbers. And the fact that it’s 10 bits, and the result isn’t very far from 1000, is pretty much why people latched on to it.
Although you’re not far wrong; you can read binary numbers in base 8 (“octal”) if you take the bits in clusters of 3. The first computer I ever used was programmed in octal. And it’s pretty much standard in much of the mainframe world to think of the bits in clusters of 4, which effectively gives you base 16 (“hexadecimal”).
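If you want to see the clusters-of-3 and clusters-of-4 idea in action, Python’s built-in formatting will show the same value as binary, octal, and hexadecimal (a purely illustrative sketch):

```python
n = 1024
print(format(n, "b"))   # 10000000000  (binary)
print(format(n, "o"))   # 2000         (octal: binary digits read in groups of 3)
print(format(n, "x"))   # 400          (hex: binary digits read in groups of 4)
```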
B) Blame marketing. Defining 1024 lots of 1024 bytes (1024 kilobytes, in other words) as a megabyte was pretty much established by techies, who cared about accuracy, before computers and their peripherals became mere commodities. Whereas once you’re selling to people who don’t know better, calling 1000GB a Terabyte lets you inflate your marketing boasts.
Sommanker: As others have said, the 1024 is because computers use binary, which is base 2. Further, there are 1000 bytes to a kilobyte (KB, 10^3 bytes), but 1024 bytes to a kibibyte (KiB, 2^10 bytes). This applies all the way up. Specific to your question, there are 1024 mebibytes to a gibibyte, but 1000 gigabytes to a terabyte and 1024 gibibytes to a tebibyte. The “-bi-” units are not commonly used, though. This is the main reason why hard drives seem to have less storage than advertised: when the manufacturer says 500GB they mean gigabytes, but the computer reports gibibytes. Hope this helps.
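For reference, here is the ladder of decimal vs. binary prefixes computed in Python (a sketch; the unit names follow the IEC convention mentioned above):

```python
decimal = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
binary  = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

for (d, dv), (b, bv) in zip(decimal.items(), binary.items()):
    print(f"{d} = {dv:>13}   {b} = {bv:>13}   ratio {bv/dv:.3f}")

# kB =          1000   KiB =          1024   ratio 1.024
# MB =       1000000   MiB =       1048576   ratio 1.049
# GB =    1000000000   GiB =    1073741824   ratio 1.074
# TB = 1000000000000   TiB = 1099511627776   ratio 1.100
```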
CryTheSly: Computers use the binary number system, using the digits 1 and 0. The reason for this is that electricity has 2 states, on and off. As binary is base 2, 2^10 is equal to 1024, which is a round number in binary; 1000 is not, so it is less natural to work with.
SolomonG: > Also why is it 1024MB to 1GB, but 1000GB to 1TB?
A simpler observation to go along with the technical ones below.
Many people would tell you that it’s 1024GB to 1TB. That was almost certainly the intention of those who first used these terms, because computers work in base 2, and in base 2, 1024 is a round number where 1000 is not.
However, the prefixes they used (mega, giga, tera, etc.) already have definitions: 10^6, 10^9, 10^12. So while the guy writing the operating system would call a MB 1024^2 (1,048,576) bytes, the guy making the hard drive can get away with calling it 10^6 (1,000,000) bytes without technically being misleading.
So his 1 GB hard drive is actually ~7% smaller than one made with the definition 1GB = 1024^3 bytes. It’s therefore slightly cheaper to make, and 99% of the people buying it don’t know or care about the difference. This is why your computer says your new 1TB hard drive is actually 931GB.
Dapperblook22: Computers store information in binary, which is base 2, not base 8 (octal). This has many advantages; for example we can exploit this base 2 property in signal transmission using parity bits, which are based upon determining the parity (even/odd) of the number of 1s in a message to check if it has an error. This idea can be extended to ECCs (Error Correcting Codes), allowing us to detect and even correct errors in a binary message!
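Here is a minimal Python sketch of the even-parity idea mentioned above (not a full error-correcting code, just the single parity bit; the message value is made up for illustration):

```python
def even_parity_bit(bits: str) -> str:
    """Return the parity bit that makes the total number of 1s even."""
    return "1" if bits.count("1") % 2 else "0"

message = "1011001"
sent = message + even_parity_bit(message)  # append the parity bit
print(sent)                                # 10110010

# The receiver re-checks: an odd number of 1s means some single bit flipped.
corrupted = "10100010"  # one bit flipped in transit
print("error detected:", corrupted.count("1") % 2 == 1)  # True
```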
As for the 1024 vs 1000 issue, we typically use powers of 2 when talking about storage. For example, if I have a 1GB file on my hard disk, this would be 1024MB. If we are talking about transmitting information, then we use powers of 10 (if I was downloading a file at a rate of 1GB/s, this would be 1000 MB/s).
There is actually a binary prefix to explicitly represent powers of two, e.g. the mebibyte (1024 kibibytes). This aims to clear up the 1000 vs 1024 confusion, though I’ve never really seen binary prefixes used at all!
It’s usually up to the manufacturer to decide which convention (1000 or 1024) they decide to use.
Thevgm01: To elaborate on what’s been said, computers run on transistors. These are basically tiny valves that can be turned on or off. If they are on, electricity flows through, and if they are off, electricity is blocked. The state of a series of transistors can be thought of as a series of 1s or 0s, for on or off respectively.
Now let’s talk about number systems. We use base 10, which means that as you count up, as soon as you go past 9, you reset to 0 and add to the left side. The exact same thing happens in base 2 (aka binary). “1, 2, 3, 4, 5, 6, 7” in decimal is “1, 10, 11, 100, 101, 110, 111” in binary. Notice that when all 3 binary digits are 1, the total is 7 in decimal.
While 1000 is a nice, round number in decimal, it’s not in binary: it’s 1111101000. As you can see, four of the bits are 0, which means some possible values go unused. If we use every combination of the 10 bits (2^10 of them), the largest value is 1111111111, and once you include the all-zeros combination that comes to 1024 distinct values in decimal.
So 1024 is used because it gives the maximum number of distinct values possible for 10 transistors. Likewise, if you want even more, you can simply use more transistors.
Edit: word choice
ERRORMONSTER: About 1074MB is the number you’re looking for, which is 1024 MiB (mebibytes), and it’s *approximately* 1 GB. It’s *actually* 1 GiB (gibibyte). Modern storage media (hard drives, USB drives, etc.) will actually have a little more capacity in them than is advertised. Over time, the memory will start to fail where it gets written to and read from most often. The individual transistors used to store data are only rated for a certain number of read/write cycles, so manufacturers round the capacity down to 1024 or 1000 and keep the extra few percent in reserve, to make the drive last a bit longer.
1 Gigabyte is literally 1 billion bytes. 1 Gibibyte is the first power-of-two number of bytes that is greater than 1 billion, that is, 2^30 = 1,073,741,824 bytes.
TL;DR – MiB != MB and GiB != GB, but advertising has conflated the two so they’re basically the same.
formervoater2: Computers use on/off signals to select which group of bits to access, so the maximum number of memory words (which are groups of 1/2/4/8 bytes) is always going to be the number of combinations that can be reached with a particular number of binary digits, i.e. a power of two.
OrangeOakie: Computers use neither base 8 nor base 10. They are all binary (base 2).
Let’s get (a bit) technical first:
Computers are built from several components, but at their core they’re a bunch of connections that produce a result.
Pretty much everyone knows that computers are related to 1s and 0s. Well, those 1s and 0s represent high and low signal levels in a component/circuit/part of a circuit/etc.
You can really only have high or low (in theory it’s actually on and off, but in practice it’s safer to keep things powered and distinguish low from high). You can’t have “not-very-high-but-not-low” because of how the circuits work. If you combine a low input and a high input, you get a high output (that’s an OR gate).
Since you can only have 2 states, the convention is that 0 means low (or off) and 1 means high (or on). This stems from electrical properties: a closed circuit conducts (minimal resistance) and an open one doesn’t (in practice, infinite resistance). Fun fact: that’s why some [buttons](https://i.stack.imgur.com/oBZxy.png) look the way they do; depending on whom you ask, the symbols represent closed/open circuits or 1s and 0s.
This leads us to the math. Since you can have 2 states represented by one *thing*, which we’ll call a **bit** from now on, a bit can represent two values: 0 and 1.
If you have 2 bits, then you can have all the combinations (00, 01, 10, 11), with 3 bits (000, 001, …, 111), and so on. You can get really creative with how you interpret those bits, so you can do a lot with them (which is why we have modern electronics). Essentially, you can have as many combinations as 2^x, where x is the number of bits you have.
Since 1 bit is really not enough for anything, we started grouping bits into larger units to make things easier:
– 4 bits make a **nibble**
– 8 bits make a **byte**
Then, as hardware became easier to produce, the numbers got ridiculous.
This is why neither base 10 nor base 8 is used: it’s impossible to use anything other than the bit as your basic unit, and a bit is base 2.
The reason why 1024MB are 1 GB is also because of this.
2^8 = 256, the number of values one **byte** can represent
2^10 = 1024
Since you don’t naturally end up with **exactly** 1000 bytes or MBs or whatnot, it became convention that 1024 would represent a kilo (1000).
1 TB is 1024 GB, not 1000. As long as you’re talking about a quantity of bits (bytes, MB, etc.), i.e. size, it’s always base 2, because it’s practically impossible for it to be anything else.
Don’t mix this up with size over time (speeds), because it is possible to limit something to 1 Mb/s and have that mean a decimal million bits per second rather than a power of 2.
Some ISPs like to fuck around and use SI (the International System) and quote their speeds in “megas” that people read as megabytes when they’re in fact megabits, so they’re not actually providing megabytes. Speeds are a different thing, and they get away with that.
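To see how much the bits-vs-bytes distinction matters for advertised speeds, a small Python sketch (the 100 Mb/s figure is just an example I picked):

```python
advertised_megabits_per_s = 100          # "100 Mb/s" uses the SI mega = 10**6
bits_per_s  = advertised_megabits_per_s * 10**6
bytes_per_s = bits_per_s / 8             # 8 bits per byte

print(bytes_per_s / 10**6)   # 12.5   MB/s  (decimal megabytes)
print(bytes_per_s / 2**20)   # ~11.9  MiB/s (binary mebibytes)
```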
SwiftOnSobriety: Computers (often) have an expandable number of memory (RAM) slots. As these need to be accessed _very fast_, the amount of memory needs to follow some standardized pattern, and due to the way logic circuits work, it would be silly for this pattern to be anything other than “powers of two”. As a result RAM, which is generally measured in GB, always comes in powers of two.
While all your RAM slots are addressed as part of the same address space, multiple hard drives (generally measured in TBs) are not. As a result, individual hard drive manufacturers can use whatever sizing scheme they want; using base ten makes their sizes look bigger, so they started doing so.
jadenPete: In the late ’90s the IEC said that a gigabyte is 1000 megabytes, a megabyte is 1000 kilobytes, and so on. Nowadays most systems follow that (including disk manufacturers). RAM is still based on base 2, however. The only OS that doesn’t abide by this is Windows, which is probably why you see that.
KapteeniJ: The maximum number you can write with 10 binary digits is 1023.
It’s sorta neat since computers mostly do things in base-2, making 1024 a nice, round number. Like, 1024 in hexadecimal is “400”, while 1000 in hexadecimal is “3E8”. So it’s basically the same reason you want to use 10, 100 and 1000: they’re nice round numbers in base-10. But in computing, where natively everyone uses base-2 instead, 1024 is the nice round number (10000000000 in binary) and 1000 is just a random mess (1111101000 in binary).
But it also happens that this round number in binary, 1024, is also very close to a round number in decimal, so it gives that extra oomph in why this is a good number to use as a base.
D_Dub07: Adding to this, a mebibyte is the proper unit for 1024 kibibytes (1,048,576 bytes), while a megabyte is actually defined as 1,000,000 bytes.
Petwins: Bits vs bytes. A byte is 8 bits, but the 1024 comes from binary: 2^10 = 1024. Computer specs tend to swap bits and bytes around assuming no one will pay attention. They are mostly right.
matthigast: A byte is 8 bits (a bit is a 0 or a 1) grouped together. There’s a difference between a megabyte and a mebibyte:
1 megabyte is 10^6 (1000000) bytes.
1 mebibyte is 2^20 (1048576) bytes.
Since most people don’t know this, companies use the former because it’s both more understandable for the average customer and it sounds bigger (with 1 terabyte being around 931.3 gibibytes).
MrBetaTheta: Powers of two.
Eight-bit systems were among the most popular early computers, and the sizes scale up proportionally.
**2 x 4 is 8 bit**
**2 x 8 is 16 bit**
**2 x 16 is 32 bit**
**2 x 32 is 64 bit**
**2 x 64 is 128 bit**
**2 x 128 is 256 bit**
Old Nintendo game systems follow this naming. Nintendo 64, etc.
SIMM and DIMM memory modules for computers also follow this convention.
I believe it has more to do with the microprocessor than with mathematics itself. It has to do with the semiconductor configuration inside the chip itself.
A transistor is a switch: 0 or 1. So you don’t need a microprocessor for that.
I just checked and indeed there were 4-bit chips, which handled four bits at a time.
It is easier to scale a physical microchip based on its foundational architecture than to make exceptions, such as subtracting 24 bits from a 1024-bit design.
Hope that helps. I did not study this in school, so double-check what I have said with your own searches.
Sidnoea: Something I haven’t seen anybody else mention yet: the reason hard drive manufacturers like to use 1000MB to 1GB or 1000GB to 1TB is so they can sell you less storage while making you think it’s more.
Catatonic27: EDIT: I used a couple of wrong numbers here: 2^10 in binary is 10000000000 and 2^8 is 100000000.
EDIT2: You should also probably downvote me just to be safe.
You kinda have to understand counting in different bases to get a firm understanding of this. Let’s just say, for the sake of argument, that no matter what base you’re counting in, you go up an order of magnitude each time you raise the exponent of the base.
By way of example: We count in base 10. Our first order of magnitude is 10, which is 10^1. That order goes all the way up to 99 before we have to go to the next order of magnitude [and add a third digit] to 100, which is 10^2. 10^3 is 1000, and 10^4 is 10,000, and so on.
Computers think and count in binary, so their orders of magnitude go up by powers of 2. 2^1 is just 2, 2^2 = 4, 2^3 = 8, then 16, 32, 64, 128, 256, 512 [you’ll see a lot of powers of 2 if you study computer science for this reason] and finally 1024, which is 2^10. In other words, 10 binary digits, or ‘bits’, give you 1024 distinct values (the largest of them, 1023, is 1111111111 in binary). For technical reasons, the smallest addressable unit of memory in most computer architectures is actually 8 bits, which gives 2^8 = 256 values (the largest being 255, or 11111111), i.e. 1 byte.
Because humans think and count in base-10, and because we typically count bytes instead of bits anyway [unless it’s in the context of serial data transmission, where bits are transmitted sequentially instead of in groups of 8, like with an internet connection; then we measure data in multiples of bits], we start measuring memory in multiples of thousands of bytes, which is why we start seeing nice round base-10 numbers.
There is actually a separate set of prefixes to denote binary quantities that hold true to the powers-of-two rule and go by multiples of 1024, called kibibyte, mebibyte, gibibyte, etc. But again, because humans think and count in base-10, we rarely use them.