Introduction to Types of Data in Computer Systems
In the realm of computer science, types of data form the fundamental building blocks of information representation. These structures define the nature of data, determining how it’s stored, processed, and utilized within computer systems. Understanding data types is pivotal for individuals diving into the world of computing, providing a foundational understanding of how information is handled.
Understanding Number Systems
Number systems serve as the language through which computers interpret and manipulate data. Various systems such as binary, decimal, octal, and hexadecimal are integral to computer architecture. Each system has its unique representation and plays a crucial role in the computations, encoding, and decoding of data.
In the previous post, we covered the classification of digital computers. In this post, we will cover two topics: types of data and number systems. This post will also grow your knowledge of basic computer skills, and it will be very helpful for people who are beginners in the realm of computers.
Types of Data
Today we will discover how computers use data. Computers employ several different types of data, and we will discuss each of them in this post.
Although a computer is considered to be a complex device capable of performing large calculations, it is essentially a data-processing machine. Two basic types of data are stored and processed by computers: numbers and characters. These are listed below. You can also review our previous posts about Data Information and Introduction to Computers.
- Character Data (consists of alphabets, numbers & special symbols)
- Numeric Data (consists of numbers)
Character Data (alphabets)
In the English language, there are twenty-six letters. Alphabetic data is represented using these 26 letters: small letters a to z, capital letters A to Z, and blank spaces.
Example:
{A, B, C, D, E, …, Z} capital letters
{a, b, c, d, e, …, z} small letters
Computers may be made to read a list of names, arrange them in alphabetical order, and print the arranged list. Thus a list of names such as Rehan, Kamal, and Shahid read by an input unit would be arranged in alphabetical order as Kamal, Rehan, and Shahid. The data processed in this case are strings of characters, the strings being Rehan, Kamal, and Shahid.
The character and numeric data fed to the computer and the output obtained from the computer must be in a form that is usable by people. For this purpose natural language symbols (like A, B, d, g, *, +, /, <, >, etc.) and decimal digits (like 2,5,7,9, etc.) are appropriate. These constitute the external data representation.
On the other hand, the representation of data inside a computer must be in a form understandable by the computer i.e. in the strings of 0’s and 1’s. This type of data representation inside the computer is called internal data representation.
Character Data Representation
We know that character data is represented as strings of 0’s and 1’s, i.e. the computer uses the binary number system to represent characters. The two symbols 0 and 1 are called bits, an abbreviation of binary digits.
Let us now see how these bits are used to represent character data.
Suppose we begin by using two bits to represent characters.
It is possible to make 4 unique combinations of 2 bits. They are:
00, 01, 10, and 11
Now let us use each of these combinations to represent a character. We can represent only 4 characters through these four combinations.
Information processing using computers requires the processing of:
- 26 capital letters
- 26 lowercase letters
- 10 digits
- 32 special symbols such as !, @, #, $, %, ^, &, *, (, ), +, etc.
- 34 non-printable characters like tab, end of line, end of page, etc.
- 128 graphic characters
Obviously, four combinations are insufficient for uniquely representing all 256 of these characters.
Let us see if combinations of 3 bits can represent these 256 characters. With three bits we can make eight (2^3) unique combinations. These 8 combinations are again insufficient to represent 256 characters.
If we go on making combinations of 4 bits, 5 bits, and so on, we find that with strings of 8 bits each we get 2^8 = 256 unique strings and can thus represent all 256 characters.
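If you want to check this yourself, here is a minimal Python sketch (Python is simply used here for illustration) that prints the number of unique combinations for a few bit-string lengths:

```python
# Each extra bit doubles the number of unique combinations: n bits give 2^n strings.
for bits in (2, 3, 4, 8):
    print(f"{bits} bits -> {2 ** bits} unique combinations")

# Output:
# 2 bits -> 4 unique combinations
# 3 bits -> 8 unique combinations
# 4 bits -> 16 unique combinations
# 8 bits -> 256 unique combinations
```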
Now, the combination used to represent each of the 256 characters must be the same for all the computers. Otherwise, data used by one computer will not be understood by another computer.
Therefore, to facilitate the exchange of data between computers, the coding of characters has been standardized.
The most popular coding system is called ASCII (American Standard Code for Information Interchange), which uses 8 bits to represent each character.
Example: The internal data representation of THINK is:
01010100 01001000 01001001 01001110 01001011
T H I N K
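You can reproduce this encoding with a short Python sketch; the built-in ord() function returns a character’s code (ASCII for these letters), which we then write out as 8 bits:

```python
# Print the 8-bit ASCII code of each character in the word "THINK".
word = "THINK"
codes = [format(ord(ch), "08b") for ch in word]  # "08b" = 8-digit binary
print(" ".join(codes))
# Output: 01010100 01001000 01001001 01001110 01001011
```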
The appendix shows a chart of ASCII codes used for all the characters.
A string of bits used to represent a character is called a byte. Characters coded in ASCII need 8 bits for each character. Thus a byte is a string of 8 bits.
Units for Measuring Capacity
| Memory Unit | Description |
| --- | --- |
| 8 bits | 1 byte |
| 1 KB (Kilobyte) | 1024 bytes |
| 1 MB (Megabyte) | 1024 KB |
| 1 GB (Gigabyte) | 1024 MB |
| 1 TB (Terabyte) | 1024 GB |
| 1 PB (Petabyte) | 1024 TB |
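To get a feel for these units, here is a small Python sketch (bytes_to_units is just an illustrative name made up for this example) that converts a byte count into KB, MB, and GB using factors of 1024:

```python
# Convert a number of bytes into KB, MB, and GB using factors of 1024.
def bytes_to_units(num_bytes):
    kb = num_bytes / 1024
    mb = kb / 1024
    gb = mb / 1024
    return kb, mb, gb

kb, mb, gb = bytes_to_units(5 * 1024 ** 3)   # 5 GB expressed in bytes
print(f"{kb} KB, {mb} MB, {gb} GB")
# Output: 5242880.0 KB, 5120.0 MB, 5.0 GB
```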
In the case of numeric data, the computer uses the binary equivalent of the data, whereas in the case of character data it uses the ASCII code. Sometimes, decimal digits are also treated as character data.
For example, if “MZT 813” is the registration number of a scooter, then 813 is character data and not numeric data, because here 813 is merely being used as a symbol and does not have a value in the conventional sense. Therefore, 813 will be represented by the ASCII codes for 8, 1, and 3. However, if 813 is a person’s basic salary, then it has a value and will therefore be represented by its binary equivalent and not by the ASCII codes for 8, 1, and 3.
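The difference is easy to see in a small Python sketch: treated as character data, “813” becomes three separate ASCII codes, while treated as numeric data, 813 becomes a single binary value.

```python
# "813" as character data: one 8-bit ASCII code per digit character.
as_characters = [format(ord(ch), "08b") for ch in "813"]
print(as_characters)        # ['00111000', '00110001', '00110011']

# 813 as numeric data: the binary equivalent of the value itself.
as_number = format(813, "b")
print(as_number)            # 1100101101
```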
Numeric Data (numbers)
Numeric data consists of the digits {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} and is stored as a numeric data type. Numerical data is used to indicate any kind of measurement or count. Different number systems are used for numeric data: the binary, decimal, octal, and hexadecimal systems.
Examples of numeric data are decimal numbers such as 47, 58, etc. Numbers are processed using arithmetic operations such as +, -, *, and /. The processing results in new values, which can be printed on a printer.
Numeric Data Representation
As mentioned earlier, external to the computer we use decimal digits (0 to 9), whereas internally the computer uses binary digits (0 and 1).
Therefore, to make the computer understand data given to it in decimal numbers, the data must first be converted to binary form and then presented to the computer. This conversion is achieved by the computer as follows:
a. The given decimal number is divided by 2.
b. The quotient is again divided by 2.
c. Division is continued till the quotient becomes zero.
d. When the remainders obtained during these divisions are written in reverse order we get the binary equivalent of the given decimal number.
For example, if we want to find out how the decimal number 42 is internally represented in a computer, then the procedure for converting the decimal number 42 into its binary equivalent is as follows:
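- 42 ÷ 2 = 21, remainder 0
- 21 ÷ 2 = 10, remainder 1
- 10 ÷ 2 = 5, remainder 0
- 5 ÷ 2 = 2, remainder 1
- 2 ÷ 2 = 1, remainder 0
- 1 ÷ 2 = 0, remainder 1

Writing the remainders in reverse order gives 101010, so (42)10 = (101010)2.

The same steps a to d can also be written as a small Python sketch (decimal_to_binary is only an illustrative name, not a standard function):

```python
# Repeatedly divide by 2 (steps a to c) and read the remainders in reverse (step d).
def decimal_to_binary(n):
    remainders = []
    while n > 0:
        remainders.append(n % 2)   # remainder of the division by 2
        n = n // 2                 # the quotient is divided again
    return "".join(str(bit) for bit in reversed(remainders)) or "0"

print(decimal_to_binary(42))   # Output: 101010
```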
Once the numeric data is presented to the computer in the binary form it performs the operations on it and finds the result. This result is also in the binary form. However, the result cannot be presented to the user in binary form. It must be first converted into decimal form and then presented to the user.
Now let us see how this conversion from binary to decimal is performed.
To find the decimal value of the binary number 10110, we proceed as follows:
(1 x 2^4) + (0 x 2^3) + (1 x 2^2) + (1 x 2^1) + (0 x 2^0)
= (16 + 0 + 4 + 2 + 0) = (22)10
Each digit of the binary number is multiplied by an appropriate power of 2. All such numbers are then added to get the equivalent decimal number.
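The same positional calculation can be written as a short Python sketch (binary_to_decimal is only an illustrative name):

```python
# Multiply each bit by the appropriate power of 2 and add the results.
def binary_to_decimal(bits):
    total = 0
    for position, bit in enumerate(reversed(bits)):
        total += int(bit) * (2 ** position)   # the rightmost bit is the 2^0 place
    return total

print(binary_to_decimal("10110"))   # Output: 22
```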
Have a go at this….
Find out the binary equivalents of 76 and 23.
What are the decimal equivalents of 1001100 and 10111?
Number System
We have already seen in the previous post that inside a computer system, data is stored in a format that cannot be easily read by human beings. This is the reason why input and output (I/O) interfaces are required. Every computer stores numbers, letters, and other special characters in a coded form. Before going into the details of these codes, it is essential to have a basic understanding of the number system.
So the goal of this post is to familiarize you with the fundamentals of these number systems and the relationships between them.
The four number systems are given below:
- Binary Number System
- Decimal Number System
- Octal Number System
- Hexadecimal Number System
Binary Number System
The binary number system is exactly like the decimal system except that the base is 2 instead of 10. We have only two symbols or digits (0 and 1) that can be used in this number system. Note that the largest single digit is 1 (one less than the base). Again, each position in a binary number represents a power of the base (2). As such, in this system, the rightmost position is the units (2^0) position, the second position from the right is the 2’s (2^1) position, and proceeding in this way we have the 4’s (2^2) position, the 8’s (2^3) position, the 16’s (2^4) position, and so on. Thus, the decimal equivalent of the binary number 10101 (written as (10101)2) is
= (1 x 2^4) + (0 x 2^3) + (1 x 2^2) + (0 x 2^1) + (1 x 2^0)
= 16 + 0 + 4 + 0 + 1 = 21
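If you only want to check such conversions quickly, Python’s built-in int() and bin() functions can do them directly (the manual method above is what actually happens behind the scenes):

```python
# int(string, 2) interprets the string as a base-2 number;
# bin(number) gives the binary form of a decimal number.
print(int("10101", 2))   # Output: 21
print(bin(21))           # Output: 0b10101
```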
Decimal Number System
The decimal number system, also known as the base-10 system, is the most commonly used number system in the world. It uses the digits 0-9 to represent numbers, with each digit representing a power of 10. For example, the
number “123” in the decimal system represents 1 hundred, 2 tens, and 3 ones.
Examples:
- The number “1234” in the decimal system represents 1 thousand, 2 hundreds, 3 tens, and 4 ones.
- The number “567” in the decimal system represents 5 hundreds, 6 tens, and 7 ones.
- The number “0.25” in the decimal system represents 2 tenths and 5 hundredths.
- The number “3.14” in the decimal system represents 3 ones, 1 tenth, and 4 hundredths.
- The number “100” in the decimal system represents 1 hundred.
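Written out with powers of 10, the first example looks like this:

1234 = (1 x 10^3) + (2 x 10^2) + (3 x 10^1) + (4 x 10^0) = 1000 + 200 + 30 + 4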
Converting a Decimal Number to Binary
To convert a decimal number to binary, we can use the method of repeated division by 2.
For example, let’s convert the decimal number 42 to binary:
- Divide 42 by 2: 42 ÷ 2 = 21 with a remainder of 0.
- Write down the remainder (0) and divide 21 by 2: 21 ÷ 2 = 10 with a remainder of 1.
- Write down the remainder (1) and divide 10 by 2: 10 ÷ 2 = 5 with a remainder of 0.
- Write down the remainder (0) and divide 5 by 2: 5 ÷ 2 = 2 with a remainder of 1.
- Write down the remainder (1) and divide 2 by 2: 2 ÷ 2 = 1 with a remainder of 0.
- Write down the remainder (0) and divide 1 by 2: 1 ÷ 2 = 0 with a remainder of 1.
- Reading the remainders from the last step to the first, the binary representation of 42 is 101010.
Alternatively, we can think of this as repeatedly shifting the number’s bits to the right: each right shift divides the number by 2, and the bit shifted out is the remainder noted in that step.
Either way, 42 in decimal is represented as 101010 in binary.
Note: To convert decimal to binary, we divide the decimal number by 2 repeatedly and note down the remainder until the quotient becomes 0. The remainders, read from the last step to the first, give the binary equivalent of the decimal number.
Octal Number System
The octal number system, or base-8 system, uses the digits 0 through 7 to represent numbers. Each digit in an octal number represents a power of 8. For example, the octal number “12” is equivalent to (1 x 8^1) + (2 x 8^0) = 8 + 2 = 10 in decimal (base-10) notation. Similarly, the octal number “37” is equivalent to (3 x 8^1) + (7 x 8^0) = 24 + 7 = 31 in decimal notation.
Octal is commonly used in computing and digital systems because each octal digit corresponds to exactly three binary digits, so it provides a compact way to write binary data. For example, the 8-bit binary number “11010101” can be represented as “325” in octal notation.
Examples:
Decimal to Octal:
(42)10 = (52)8
Octal to Decimal:
(52)8 = (42)10
Binary to Octal:
(11010101)2 = (325)8
Octal to Binary:
(325)8 = (11010101)2
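These conversions can be checked with a small Python sketch using the built-in int(), oct(), and bin() functions:

```python
# Decimal <-> octal and binary <-> octal conversions.
print(oct(42))                   # Output: 0o52   -> (42)10 = (52)8
print(int("52", 8))              # Output: 42     -> (52)8 = (42)10
print(oct(int("11010101", 2)))   # Output: 0o325  -> (11010101)2 = (325)8
print(bin(int("325", 8)))        # Output: 0b11010101
```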
Hexadecimal Number System
The hexadecimal number system, also known as base 16, uses 16 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. Each digit represents a power of 16. For example, the number F3A in hexadecimal is equivalent to 3,898 in decimal (base 10). The rightmost digit represents the 1’s place, the next digit to the left represents the 16’s place, the next the 256’s place, and so on.
Here are a few examples of converting hexadecimal to decimal:
F3A in hexadecimal is equal to 3,898 in decimal (F * 16^2 + 3 * 16^1 + A * 16^0 = 15 * 256 + 3 * 16 + 10 * 1 = 3,898)
2A3 in hexadecimal is equal to 675 in decimal (2 * 16^2 + 10 * 16^1 + 3 * 16^0)
B1 in hexadecimal is equal to 177 in decimal (11 * 16^1 + 1 * 16^0)
Here are a few examples of converting decimal to hexadecimal:
3,898 in decimal is equal to F3A in hexadecimal
675 in decimal is equal to 2A3 in hexadecimal
177 in decimal is equal to B1 in hexadecimal
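Again, these conversions are easy to verify with Python’s built-in int() function and the “X” (uppercase hexadecimal) format specifier:

```python
# Hexadecimal to decimal using int() with base 16.
print(int("F3A", 16))     # Output: 3898
print(int("2A3", 16))     # Output: 675
print(int("B1", 16))      # Output: 177

# Decimal to hexadecimal using format() with the "X" specifier.
print(format(3898, "X"))  # Output: F3A
print(format(675, "X"))   # Output: 2A3
print(format(177, "X"))   # Output: B1
```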
Conclusion
We have created this post for newcomers who are getting started with number systems. It is designed to develop basic computer skills and enhance your knowledge. You can also review our other posts about basic computer skills. If anything is missing from this post, or if you liked it, please let us know in the comments. Thanks!
For further guidance see our following posts: