In low-level programming, whether in C, assembly language, or any other language, when we address low-level components we use addresses written in hexadecimal. What is the main reason behind it?
Qaim, let's look at the evolution of human numbering systems: humans tried base 13, base 11, base 4, base 3, oh man, you name it ... until the Hindu-Arabic numbering system, BASE 10, was invented. It made everything much easier, from business transactions to handling all sorts of daily interactions involving numbers ... Because we have 10 fingers :)
==============================
How about computers? It is very clear where the BINARY numbering came from: BASE 2 is the natural representation for CPUs ... TRUE or FALSE, the most NOISE-TOLERANT numbering system, which is necessary when you are working at 4 GHz, flipping billions of these BITS a second, and you do not want to mistake a 0 for a 1. Any higher-base system, such as BASE 16 (i.e., hexadecimal) or BASE 256 (the BYTE), is a natural extension of BINARY obtained by grouping MULTIPLE BINARY bits ...
Your question translates to: WHY DID WE INITIALLY CHOOSE TO GROUP 4 BITS ... In other words, why not 5 bits? 5 bits would seem much better than 4 ... 2^5 = 32, and a two-nibble "byte" would then be 10 bits, holding 1024 values, which is much closer to 1000 and easy to understand. Isn't 10 bits a much nicer number than 8?? So, why did we choose 4, 8, 16, 32, 64 bits for CPU widths ??? instead of 5, 10, 20, 40, 80? or something else?
============================
For the answer, let's go down memory lane ... A Japanese calculator company calls a then very good semiconductor manufacturer named INTEL in the early 70's and asks them to design a specialized chip that is PROGRAMMABLE and can compute things in CHUNKS. They choose a chunk size of 4 bits, since this is a size that is compatible with that era's technology and can perform that calculator's computations well. INTEL designs it, but has many manufacturing issues ... They run out of time and cannot deliver the product ... The Japanese calculator manufacturer cancels the order, and INTEL thinks of ways to sell that IC as a PROGRAMMABLE IC. They call it the 4004. The product sells well, but the data size is not big enough. INTEL immediately designs a version that can process TWICE AS MANY BITS (8 bits) at a time. The 8008 was born! This evolves into the 8080, 8085, 8086 (16 bits), and we know the rest of the story.
================================
I absolutely, positively cannot find any reason why a 10-bit, 20-bit, or 40-bit CPU would not actually work better. So this makes me wonder: what would have happened if the Japanese manufacturer had asked for a 5-bit programmable chip initially?
First of all, computers still operate on binary logic (only two states, 0 and 1). Hence, if we used a decimal (0-9) system for addressing, interpreters would have the additional burden of converting decimal into binary words (bit streams). In contrast, hexadecimal numbers are easy to convert into binary words: we just group the bits four at a time.
For that matter, any numbering system with base 2^x (i.e. 2, 4, 8, 16, 32 ... etc.) will do the job. For example, octal works as well, where we group 3 bits instead. Octal was used on 12-bit, 24-bit, and 36-bit processors, e.g. the PDP-8, ICL 1900, and IBM mainframes (http://en.wikipedia.org/wiki/Octal#Usage). Since hexadecimal has been a well-established addressing convention since the 16-bit processors (e.g. the 8086), it is still followed.
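To make the grouping concrete, here is a minimal C sketch (the address value 0xB6F4 is made up for illustration): each hex digit lines up with exactly 4 bits and each octal digit with 3, while decimal digits do not fall on bit boundaries.

/* Hypothetical address printed in three bases to show the digit/bit alignment. */
#include <stdio.h>

int main(void)
{
    unsigned int addr = 0xB6F4;          /* 1011 0110 1111 0100 in binary */

    printf("hex:     %X\n", addr);       /* B6F4   -> 1011|0110|1111|0100      */
    printf("octal:   %o\n", addr);       /* 133364 -> 1|011|011|011|110|100    */
    printf("decimal: %u\n", addr);       /* 46836  -> no digit-by-digit mapping */
    return 0;
}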
Dear Qaim Mehdi Rizvi, machine language is 'binary' (2^1). Hexadecimal, which is 2^4, makes things much easier: a group of 4 bits is now represented by a single hex digit. But we should not confuse ourselves into thinking the machine receives the hexadecimal; it does not. Like an interpreter, a converter (hexadecimal to binary and binary to hexadecimal) is used. We use it to speed up reading and to save space on the page.
I would agree with Sujit Sahoo. To me the hexadecimal representation seems like a reasonable compromise between length (of numbers / memory addresses) and 'readability' (in the sense that a hexadecimal representation can easily be converted into a binary representation).
I guess it is used because 2^(2^2) = 16: it fits the binary system and, as Sujit Sahoo and Claas Ahlrichs noted before, it is easily handled (including the alphabet), while its power of representation is sufficient. However, the Russian development of ternary computer systems never reached the market (for a reference, see Hunger, "SETUN: An Inquiry into the Soviet Ternary Computer").
I doubt that base 256 (2^8) would be easily understood by humans (due to the large alphabet it requires), whereas base 8 (2^3) does not fully exploit the decimal system humans are used to ...
Legacy. 30+ years ago, when I was an undergrad, we were programming on mainframes (32-bit machines), and assembly language programming was done via a 'yellow' card. If your code got an exception (S0C4, S0C7), you got a listing of the contents of core memory at the time of the error. Somehow looking at pages and pages of hexadecimal numbers was much easier than the prospect of reams of pages in binary.
Some minicomputers were using octal notation, which is why languages like C support values in decimal, octal, and hex.
32-bit registers could be preloaded with hex values that spell words, so that if you dumped them you could immediately tell whether any had changed. My favorite preloaded value is still 'deadbeef'.
Hexadecimal came into use in computers with 16-bit registers, while octal notation had been used for data before that; I think the change in memory word sizes is what caused the shift.
Dear all, assembly language is essentially a symbolic layer over machine code, which is usually written out in hexadecimal. The address lines themselves are not hexadecimal; the program counter of a microprocessor, whether a 4004, 8085, 8086, 8088, ..., is just a binary register.
First, it is easy to map from hex to binary and back (each hex digit has four corresponding binary digits, so the conversion to/from binary is easy).
Second, it saves space when specifying an address range (as in IPv6 networks). Another example is storing human-readable configuration data (e.g., Cisco router configs, BIOS settings) in a file or EPROM.
Part of it is historical, part of it is practical. Back in the 'good old days', computer memory chips were organized in 'nibbles', i.e.: 4 bits. They could easily be concatenated into longer 'bytes' and 'words' of 8 and 16 bits respectively with a clever interconnect scheme. So, it was natural to express computer code in terms of 'nibble's, that is, hexadecimal numbers. Thus the representation more closely reflects the meaning.
The other part is practical. Computers are still binary machines, and until quantum computers are perfected we'll be stuck with notation schemes that directly express the binary nature of the data. We could express numbers in any handy base or symbolic system, like decimal, or even Roman numerals, but neither of those is convenient when dealing with computers. So, for convenience, any system should be base 2 or a power thereof. Again, the representation more closely reflects the meaning.
As a programmer of several decades, I can say raw binary, hex and octal are all still very popular. It all depends on context. For example, I was just working with gnuplot, making some plots for a presentation. Gnuplot wanted special characters (e.g. the Greek letter sigma) escaped in octal.
But if you look at Unicode tables, they're invariably expressed in hex. That's because Unicode is an extension of the ASCII codes, which are byte values. ASCII has traditionally been expressed as two hexadecimal digits to reflect the underlying two-nibble structure of the code. Unicode, a.k.a. double-byte character representation, is just an extension of ASCII, and hexadecimal more closely reflects the underlying meaning. Fortunately, 2^16 values are adequate to represent every single alphabet on the planet.
I also do a lot of LabVIEW coding. There, binary, hex and octal are all very useful number representations, depending on what you'd like to do. For example, stringing a bunch of Booleans together in an array is easily considered in binary. The representation more closely reflects the meaning. Expressing that Boolean array as a decimal would completely obscure the meaning.
I also work with Python, C and C++. If I'm just adding two numbers, decimal works just peachy. But if I want to do something like a logical operation, e.g. a bitwise AND, or an arithmetic shift, hexadecimal or even binary is most appropriate. That is, I choose a representation that reflects the underlying meaning.
So, when one is writing assembly code, hex is very convenient. The number of op codes is normally limited to a small number (well, except for a CISC architecture). There, a byte (two nibbles, or a two-digit hex number) can easily encompass every op code ID. Address widths are normally some multiple of 4 bits, with only very few exceptions, so an address is easily segmented into hexadecimal digits. With a little practice, one can simply look at a hex number and immediately write out the binary number. This is not easily done when looking at a decimal number (unless you're very good!). When you're writing assembly code, you're just one step, maybe two steps, up from the raw computer representation. It seems appropriate that the representation we choose closely reflects the underlying meaning.
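As a small illustration of that point, here is a C sketch (the address value and the 4 KB page size are assumptions for illustration, not anything from the discussion above): written in hex, the masks read directly as bit fields, which is exactly why hex feels natural one step up from the machine.

/* Hypothetical example: split an address into page number and offset.
   The hex constants make the bit fields visible at a glance. */
#include <stdio.h>

int main(void)
{
    unsigned int addr   = 0x0040A3F7;
    unsigned int offset = addr & 0x00000FFF;   /* low 12 bits: offset within an assumed 4 KB page */
    unsigned int page   = addr >> 12;          /* remaining bits: page number */

    printf("addr=0x%08X  page=0x%05X  offset=0x%03X\n", addr, page, offset);
    /* The same masks written in decimal (4095, 4096) would hide the bit pattern. */
    return 0;
}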
With due respect to all the authors, my intention was not to rectify something or to cast doubt on something. Computer science is a practical field: if we make a mistake, either in programming or in modelling, we get an error.
Dear Peter, we all started with the same basics: we have number systems, and hex is more capable than the others. But we should not forget that this is SCIENCE, not TRADITION. We should try to look beyond this limit, and luckily we are all from the same stream, so at least we can discuss it and try to find a solution, or agree upon a common conclusion.
IN HEX, WE CONSIDER 0-9 (ALL TEN NUMERALS) AND A-F (SIX LETTERS). WHAT WOULD HAPPEN IF WE CONSIDERED A-Z (ALL 26 LETTERS) AND 0-5 (SIX NUMERALS)???
*** Please take it seriously and discuss the possibilities ***
heh heh ... nice story Tolga. But there actually are/were odd-bit-length CPUs out there, especially for embedded applications. But economies of scale and industry standardization have worked to make things pretty homogeneous at n*4 bits wide. So we're obviously now at 64 bits wide (if you don't count GPUs, which are very much wider still). And IPv6 addresses are even wider, at 128 bits. Hmm ... let's see ... 2^64 is about 18E18; that's a pretty big number. What else in science is up on that scale?
The astronomers tell us there are about 1E11 galaxies, each with about 1E11 stars (on average), so that's 1E22 stars in the observable universe. So, unfortunately, if we want to give each and every star in the observable universe an 'address', we have to extend our address space beyond 64 bits.
Now, the human brain has (in round numbers) something like 1E11 neurons, each with on the order of 1E4 synapses, so roughly 1E15 synapses in total. Huh ... I never thought of it like that before ... so 64-bit addresses would just about cover one brain's cells and synapses, but not if you wanted an address for every synapse of every person on the planet.
OK, let's think chemistry; I'm a chemist (among other things), after all. A mole is 6.02E23 atoms/particles. So if we're going to keep track of just 18 grams of water molecules (one mole), we'd need much more than 64-bit-wide addresses for that task.
So, will CPUs continue to grow beyond 64 bits wide? Yeah, I'm pretty sure about that.
Dear Qaim Mehdi Rizvi, it's a very simple answer. Machine language is 'binary' (2^1), i.e. 0 and 1 (true and false). The reason we use hexadecimal, which is 2^4, is an exploitation of GF theory for making computer processing faster. Those who have used punched cards know how difficult that age was. GF(2^n) is the framework in which we can pick any integer value of n and make the machine's processing simpler. Hexadecimal is GF(2^4). Hexadecimal is a number system which uses the digits 0, 1, 2, ..., 9, A, B, ..., F. But this is not the end; one can use 2^(...(2^n)). Further, I want to add: think about why we use BCD, ASCII, and many more. Hexadecimal and octal are number systems belonging to the family of the binary number system, having their roots in base 2. GF(2^n) theory makes it very easy to manipulate arithmetic, logic, addressing, data transfer, instruction codes, etc., and that is the main reason for using hexadecimal. Also think about why we use ASCII, BCD, and 4-to-16/16-to-4 and 3-to-8/8-to-3 decoders and encoders.
In short: first, hexadecimal notation is perhaps the best compromise (for our practical purposes) between word length and number of characters; second, its base is a power of the binary base, allowing packaging into tetrads (groups of four bits).
Binary is the most direct one to use, but it is too long and hard to keep track of. So people grouped the bits four at a time and chose 16 symbols to denote them (0, 1, 2, ..., E, F), placing them one after the other. Now the number is short, easy to convert back to the original binary, and as straightforward as it could be. This is the one and only reason.
@Patrick, GPUs are going to go beyond 64b. The MEMORY BUS sizes are already 128b, 256b, 512b, etc ... Computation size is only 32b for the cores, though ... These choices are all based on technological limitations
================
I would like to point out one fact about why INTEL initially chose 4-bit groupings. The reason is 100% TECHNOLOGICAL, nothing MATHEMATICAL. Before INTEL's all-in-one-IC programmable chip (later to be cautiously called a MICROPROCESSOR, the 4004), IBM most definitely had higher-bit-width COMPUTERs, but not MICROPROCESSORs (let's call them uP for short). The concept of the uP emerged when you started building computers inside a single chip, rather than from many, many TTL ICs (such as quad AND gates and registers) soldered on a board. An example of such a COMPUTER, called a MAINFRAME back then, is the IBM 7030, introduced in the early 60's.
.......... http://en.wikipedia.org/wiki/IBM_7030
In fact, I programmed on one of these computers back in Turkey 20+ years ago :) Our school, Istanbul Tech, had leased one from IBM for a gazillion dollars on a 20+ year lease :) This computer DID NOT HAVE a uP. The distinction between a COMPUTER and a uP came when VLSI technology advanced far enough that you could put 1) the register file, 2) the controller, 3) the ALU, and a bunch of other stuff into a single chip, which together made up the CENTRAL PROCESSING UNIT (later to be called the CPU).
=================
In the early 70's, the distinction between a uP and a COMPUTER started when INTEL showed that you can integrate a bunch of the aforementioned components inside a single integrated circuit (IC). The next question to answer was this:
CLEARLY, I CANNOT ACHIEVE 20b, 30b, 40b WIDTHS INSIDE THIS FIRST uP. SO, WHAT BIT WIDTH SHOULD I CHOOSE? If you look at the IBM 7030, the definition of the BIT WIDTH is MESSED UP :) It will make your head spin. Some registers are 18 bits, some 19 bits, some can be programmed from 10 bits to 48 bits. It is a PROGRAMMER'S NIGHTMARE !!! :) Yet it cost $15M :)
==================
So, what bit width would you choose if you were an engineer working at INTEL back in 1971 when designing 4004 ?
Please note that we are talking about a 10 um technology chip that could integrate only 2,300 transistors, not the 3 BILLION that a modern 28 nm GPU can integrate :) This is such a massive limitation that it determines the bit width you choose: if you chose 5 bits for each instruction, you would RUN OUT OF TRANSISTORS TO BUILD THE 4004. The answer is very clear: choose a common bit width that can be implemented FOR EVERY INSTRUCTION: FOUR bits.
If INTEL had chosen 5 bits back then, we would have seen a 5b, 10b, 20b, 40b, 80b evolution ... The reason for that initial choice was NOTHING BUT TECHNOLOGICAL.
Whatever we can do now, we could just as easily do in a new number system. Yes, we would have to make some major changes to the architecture, but to succeed we would have to make them. In the attachment I provide a comparative view of various number systems; just look at the capacity of this number system.
We could also handle memory addressing and the various conversions as well.
Qaim and Samir, BINARY is most definitely very inefficient. Why? Because you are only storing one bit in a voltage range of, say, 0 to 5 volts. So Logic 0 is 0 volts, and Logic 1 is 5 volts. This seems super inefficient. IT IS !!! But the reason for it is NOISE TOLERANCE. Let me bring up two other systems, where MORE THAN ONE BIT is packed into this voltage range:
=================
First one is the flash drives we all use. They use MLC (Multi-level cell).
In other words, instead of Logic 0 being 0 volts and Logic 1 being 5 volts, the voltage range is divided into 4 or 8 levels. Assume 4. This means Logic 00 is 0 volts, Logic 01 is 1.667 volts, Logic 10 is 3.333 volts, and Logic 11 is 5 volts. However, notice that your NOISE MARGIN went from a full 5 volts down to 1.667 volts. The separation between two logic states is a lot less, so you are much more susceptible to noise. So you use ERROR CORRECTION CODES to detect and correct the accidentally-flipped bit states ...
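A tiny C sketch of that MLC arithmetic (using the illustrative 0-5 V range and 4 levels from the description above, not a real device spec):

/* Illustrative only: compute the 4 level voltages and the shrunken separation. */
#include <stdio.h>

int main(void)
{
    const double vmax = 5.0;                 /* assumed full voltage range */
    const int levels  = 4;                   /* 2 bits per cell */
    const double step = vmax / (levels - 1); /* separation between adjacent levels */

    for (int i = 0; i < levels; i++)
        printf("logic %d%d -> %.3f V\n", (i >> 1) & 1, i & 1, i * step);

    printf("separation shrinks from %.2f V (2 levels) to %.3f V (%d levels)\n",
           vmax, step, levels);
    return 0;
}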
==================
Second example is the PAM-5 coding they use in the Gigabit ethernet protocol.
In this coding there are 5 different levels for a pulse: -2 volts is one level, -1 volts another, and 0, +1, and +2 volts are the other three, for a total of 5 levels. But you are now a lot more susceptible to noise on the Ethernet lines than if you just used 0 or 1 as TWO states ... now you have 5 states. This doesn't cause any problems, though, because across the 4 Ethernet pairs, out of the potential 5^4 = 625 SYMBOLs, only 256 are VALID SYMBOLS, and this provides sufficient redundancy for error correction to make it work.
===================
What Qaim described above seems to use a similar idea ... As long as you can correct for errors, you can go ahead and pack more than one bit into a single cell's storage area ... MLC flash drives pack 2 or 3 bits into a single cell ... I don't know if they are at 4 yet ...
Because computers use base 2 and hexadecimals are a compact and elegant way of writing binary. Now, why 16 and not 8 as the base? Because most of us have 10 fingers and are comfortable using base 10, so going back to 8 would be a bit of a waste.
Fernando, the computer industry chose 4-bit groupings for another reason too: with 4 bits, you can also represent a decimal digit with a single hexadecimal digit. BCD (Binary Coded Decimal) was a very popular standard.
In fact, when you use PURE BINARY, some decimal numbers (like 0.1) cannot be perfectly represented. This is a huge problem for the financial industry, where half-penny differences could sum up to huge amounts in bank transactions ... This is the reason why the DECIMAL concept was brought back in 2008 with the introduction of the IEEE 754-2008 floating point standard, which incorporates BASE 10 (decimal) as an integral part of the standard ...
Notice that using 4 bits for each decimal digit is quite wasteful (i.e., the 0-15 range is being used to represent only the 0-9 range, leaving 37.5% waste!) ...
But in the IEEE 754-2008 standard, 10 binary digits are used to represent 3 decimal digits, using the range (0...1023) to represent (0...999), with only about 2% waste, which is perfectly acceptable.
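A quick sanity check of those waste figures, as a small C sketch:

/* Back-of-the-envelope check of the BCD vs. densely-packed-decimal waste numbers. */
#include <stdio.h>

int main(void)
{
    /* BCD: 4 bits (16 codes) for one decimal digit (10 values). */
    double bcd_waste = (16.0 - 10.0) / 16.0;          /* 37.5 % */

    /* IEEE 754-2008 style packing: 10 bits (1024 codes) for 3 digits (1000 values). */
    double dpd_waste = (1024.0 - 1000.0) / 1024.0;    /* ~2.3 % */

    printf("BCD waste: %.1f %%\n", 100.0 * bcd_waste);
    printf("10-bits-per-3-digits waste: %.1f %%\n", 100.0 * dpd_waste);
    return 0;
}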
Tolga_Soyata, the question asked why we use hex for computer ADDRESSING. Any 'old' engineer who wired their own microcomputers and installed memory knows why we use hex for addressing. Apart from that, thank you for the extra insights on representations of numbers :-)
As all of you out there know, computers are binary machines working with 1's and 0's; thus all computer words are combinations of 1's and 0's. For simplicity and ease of reading and communication, we opt for hexadecimal by replacing every nibble (4 bits) with its hexadecimal equivalent, a digit from 0 to F. In this sense I agree with Peter Breuer.
Peter, here is the link for the PAM-5 coding: http://en.wikipedia.org/wiki/PAM-5
The voltage levels depend a little bit on the implementation, but the number of levels is exactly 5 ... and the coding is exactly what I described. The idea is this: HOW CAN I GET A RATE OF 1000 Mbps ON 4 WIRE PAIRS OF A CABLE RUNNING AT 125 MHz AND STILL HAVE SUFFICIENT NOISE TOLERANCE?
ANSWER: ... 125 MHz for each twisted wire pair = 125 Mbps per pair ...
Use 4 wire pairs = x4 ... still only 500 Mbps (still 2x short)!
To get another 2x, pack 8 bits into the 4 wire pairs per symbol period ... How can I do that and still have sufficient noise tolerance? ANSWER: use 5 different voltage levels on each wire pair ... This gives you 5^4 = 625 SYMBOLs across the 4 pairs, and you are only trying to send 256 symbols (i.e., 8 bits' worth). So you have the luxury of 369 additional symbols to use for error correction and synchronization ...
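The same symbol arithmetic, written out as a small C sketch for anyone who wants to check it:

/* Check the 1000BASE-T PAM-5 symbol counts described above. */
#include <stdio.h>

int main(void)
{
    int levels = 5, pairs = 4;
    int symbols = 1;
    for (int i = 0; i < pairs; i++)
        symbols *= levels;                      /* 5^4 = 625 distinct 4-pair symbols */

    int needed = 1 << 8;                        /* 256 symbols carry 8 data bits per symbol period */
    printf("total symbols: %d, needed for 8 bits: %d, spare: %d\n",
           symbols, needed, symbols - needed);  /* 625, 256, 369 */

    /* 125 Msymbols/s x 8 bits/symbol = 1000 Mbps across the 4 pairs. */
    return 0;
}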
If you think this is cool, read the 10 Gigabit Ethernet standard! It is a TWO-DIMENSIONAL version of this PAM-5 code, called PAM-16, running at 500 MHz ... Good stuff!
Again, apologies to everyone for going on major tangents !!!
I completely appreciate the way Peter is thinking about it ... exactly how I was thinking before I discovered PAM-5 and PAM-16 ... EACH TWISTED PAIR CORRESPONDS TO A BIT ... OR TWO BITS ... In 10BASE-T and 100BASE-T that is what they did: each pair was one bit.
But when you had to SQUEEZE a lot more information into the same channel, and yet at the same time worry about NOISE, they realized it is a much better idea to consider all four pairs of 1000BASE-T as one lumped unit ... because the four pairs transmitted together really represent ONE SYMBOL, and they will be affected by noise simultaneously and similarly anyway ... Coding all four pairs as one lumped unit allowed much richer alternatives for error coding ... like the PAM-5 coding ...
One simple reason for the success of hexadecimal is that 4 bits are already required to code the numbers 0 through 9. To make full use of the coding possibilities of those 4 bits, hexadecimal is perfect, because it also includes the existing 10 decimal digits.
Dear Qaim Mehdi Rizvi, computer hardware (digital logic) design is based on the principle of two levels (binary options, a base-2 system). Any base that is easily translated into base 2 works, and that is the simple reason for using hexadecimal. Assembly language tooling was built on hex-to-binary and binary-to-hex conversion.
Machine code (instructions that a computer can execute directly) is a series of binary digits, 1 and 0. It is hard for human beings to read. Hexadecimal numbers (base 16) provide a simple shorthand for representing binary numbers (base 2), making programs much easier to read and write. The hexadecimal system is a more human-readable version of the binary system, since binary numbers can be converted to hexadecimal numbers by looking at only four digits at a time. For example, the binary number 1111 0100 is equivalent to the hexadecimal number F4, because 1111 = F and 0100 = 4, where F = 15 decimal.
The hexadecimal system (base 2^4) allows most numbers to be recorded using substantially fewer digits than are required in the binary system (base 2^1), and there is a very easy method of converting between hexadecimal and binary simply by grouping the bits in fours.
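A minimal C sketch of the "four bits at a time" rule, using the F4 example above (the lookup string is just one convenient way to do the digit mapping):

/* Convert one byte to two hex digits by splitting it into nibbles. */
#include <stdio.h>

int main(void)
{
    unsigned char b = 0xF4;                   /* binary 1111 0100 */
    const char *digits = "0123456789ABCDEF";

    char hi = digits[(b >> 4) & 0xF];         /* 1111 -> 'F' */
    char lo = digits[b & 0xF];                /* 0100 -> '4' */

    printf("1111 0100 -> %c%c\n", hi, lo);
    return 0;
}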
Machine language has base 2. As the integer n in any 2^n base increases, it naturally facilitates computing. Assembly language conventions were designed around the 2^4, i.e. hexadecimal, base.
A computer only understands binary, a collection of 0's and 1's, i.e. ON/OFF. For human readability, a binary number, which may represent an address or data, has to be converted into a human-readable format, and hexadecimal is one such format. But the question can be: why have we converted binary to HEX only, and not to decimal, octal, etc.? The answer is that HEX is the one that can be converted with the least overhead in both hardware and software. That's why we write addresses in HEX. Internally, they are still used as binary only.
Hexadecimal is just a more compact and elegant way of writing values than binary, as Nebi Caka put it. I wonder what kind of world we would live in if we had 8 fingers in each hand though...
Machine language is a set of binary-coded instructions and data that are executed directly by central processing unit (CPU) of a computer. Each processor has an ‘instruction set’ i.e. a vocabulary of instructions it can understand. The instruction is often made up of 2 parts: the ‘operation code’ and the ‘operand address’.
Example 1: In program instructions represented as a sequence of bits:
0000 1001 / 0100 0000 / 0100 0010 / 0100 0100
– First 8 bits represent the ADD command i.e. operation code or opcode
– Second 8 bits represent first operand
– Third 8 bits represent second operand
– Fourth 8 bits used to store sum
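As a sketch only, here is how the hypothetical 4-byte format of Example 1 could be pulled apart in C (the field layout is the one described above, not any real CPU's encoding):

/* Decode the made-up instruction from Example 1: opcode, two operand addresses, destination. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t instr[4] = { 0x09, 0x40, 0x42, 0x44 };  /* 0000 1001 / 0100 0000 / 0100 0010 / 0100 0100 */

    unsigned opcode = instr[0];                     /* ADD in this made-up encoding */
    unsigned op1    = instr[1];                     /* address of first operand */
    unsigned op2    = instr[2];                     /* address of second operand */
    unsigned dest   = instr[3];                     /* address where the sum is stored */

    printf("opcode=%02X  op1=%02X  op2=%02X  dest=%02X\n", opcode, op1, op2, dest);
    return 0;
}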
Assembly language is a set of symbolic instruction codes that are meaningful abbreviations or mnemonics (simplified command words) followed by the needed data that correspond to machine language instructions and data. Mnemonic codes are used for the operation codes instead of binary and addresses are written in hexadecimal.
Example 2: mov eax, A = Move into register eax the contents of the location called A.
Binary numbers – numbers with base 2, i.e. 2^n. Possible digits: 0, 1
Decimal numbers – numbers with base 10, i.e. 10^n. Possible digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Hexadecimal numbers – numbers with base 16, i.e. 16^n. Possible digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A (= 10), B (= 11), C (= 12), D (= 13), E (= 14), F (= 15)
The main use of hexadecimal numbers in computing is for abbreviating lengthy binary numbers. Since almost all computer data sizes are multiples of 4 bits, we can trivially convert binary numbers into hexadecimal numbers by replacing each group of four binary digits with a single hexadecimal digit.
Example 3: Binary number 1111 1010 1101 1000 is equivalent to hexadecimal FAD8
Joke:
There are only 10 kinds of people in this world. Those who know binary and those who don’t.
Actually, Intel came up with the Intel HEX format (ihex), later known simply as hex files. It was they who first put programmable devices on the commercial front (with due respect to Motorola), and we then just followed ... There is no particular answer to your question, if we can break with tradition!
The main reason why we use hexadecimal numbers is that they provide a more human-friendly representation: it is much easier to express binary number representations in hex than in any other base.
Computers do not actually work in hex.
Let's take an example, using a byte.
1 byte = 8 bits. It can store values from 0 to 255 (0000 0000 to 1111 1111 in binary). Each hexadecimal digit represents four binary digits, also called a nibble (1 byte = 2 nibbles).
For example, a single byte can have values ranging from 0000 0000 to 1111 1111 in binary form and can be easily represented as 00 to FF in hexadecimal.
Expressing numbers in binary is not easy for us. You cannot tell your friend that your mobile number is 1001 1111 1010 0101, and you cannot use numbers like that every day for any number of contacts. Thus, we need an easier notation.
Since a byte is 8 bits, it makes sense to divide it into two groups: the top 4 bits and the low 4 bits. Since 4 bits give you a possible range of 0 – 15, a base-16 system is easy to work with, especially if you are only familiar with alphanumeric characters.
It’s easier to express a binary value to another person as “B” than it is to express it as “1011”. This way I can simply use 2 hex digits to represent a byte and have it work cleanly. If I am piss-poor at math, I only need to memorize the multiplication tables up to 15. So if I have a hex value of CE, I can easily determine that 12 * 16 + 14 = 206 in decimal, and can easily write it out in binary as 1100 1110. Trying to convert from binary would require me to know what each place holder represents and to add all the values together (128 + 64 + 8 + 4 + 2 = 206). It’s much easier to work with binary through hex than through any other base system.
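The same CE arithmetic, written as a short C sketch:

/* Split the byte 0xCE into its two nibbles and rebuild the decimal value. */
#include <stdio.h>

int main(void)
{
    unsigned char v = 0xCE;       /* binary 1100 1110 */

    unsigned hi = v >> 4;         /* C = 12 */
    unsigned lo = v & 0x0F;       /* E = 14 */

    printf("CE = %u*16 + %u = %u\n", hi, lo, hi * 16 + lo);   /* 12*16 + 14 = 206 */
    return 0;
}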
There are several uses for hexadecimals in computing:
1. HTML / CSS Colour Codes
Hexadecimal numbers are used to represent colours within HTML or CSS.
The 6 digit hex colour code should be considered in three parts.
The first two digits represent the amount of red in the colour (max FF, or 255)
The next two digits represent the amount of green in the colour (max FF, or 255)
The final two digits represent the amount of blue in the colour (max FF, or 255)
By changing the intensities of red, green and blue, we can create almost any colour.
E.g. orange can be represented as #FFA500, which is (255 red, 165 green, 0 blue). Visit hexinvaders.com to see this in action.
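A small C sketch parsing the #FFA500 example above into its red/green/blue components (sscanf with 2-character hex fields is just one convenient way to do it):

/* Parse a #RRGGBB colour code into its three byte-sized components. */
#include <stdio.h>

int main(void)
{
    const char *colour = "#FFA500";
    unsigned r, g, b;

    if (sscanf(colour, "#%2x%2x%2x", &r, &g, &b) == 3)
        printf("%s -> red=%u green=%u blue=%u\n", colour, r, g, b);   /* 255, 165, 0 */
    return 0;
}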
2. MAC Addresses
A Media Access Control (MAC) address is a number which uniquely identifies a device on a network. It relates to the network interface card (NIC) inside the device.
e.g. B4-CD-C7-4A-8B-D2
Expressing MAC addresses in hexadecimal format makes them easier to read and work with.
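A minimal C sketch using the example address above: a MAC is just six bytes, and printing each byte as two hex digits gives the familiar readable form.

/* Print a 6-byte MAC address in the usual hyphen-separated hex notation. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t mac[6] = { 0xB4, 0xCD, 0xC7, 0x4A, 0x8B, 0xD2 };

    for (int i = 0; i < 6; i++)
        printf("%02X%s", (unsigned)mac[i], i < 5 ? "-" : "\n");
    return 0;
}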
3. Assembly Code and Memory Dumps
Hexadecimal values have advantages over binary because:
They are easier and faster to work with, taking up less screen space
Mistakes are less likely and easier to trace/ debug
Finally
A big benefit of hexadecimals is that they are easy to convert to binary, if needed.
In the above examples, all values are still physically stored as binary, so no storage space is saved by using hex.