Some questions people had in class #2

 

How are info, pictures, etc. stored as numbers?

 

            The easiest to understand is characters, which we will discuss in class #3. Text uses a coding scheme known as Unicode (an extension of the earlier ASCII), in which an 'A' is stored as a 65 (that is, 1000001 in binary), a 'B' is stored as a 66, and so on. The last section in Chapter 1 addresses representing colors using numbers. A given color is represented by three numbers (one byte each, so values from 0-255) indicating how much red, how much green, and how much blue it contains. The bigger answer is that some scheme must be developed for each kind of info. Frequently a file extension (e.g. .jpg, .wav) indicates a particular scheme developed by a company, or by a committee that has developed a standard way of doing it.
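For the curious, the character and color codes described above can be checked in a few lines of Python (the language is my choice for illustration; the class does not assume it):

```python
# ord() returns a character's Unicode code point; bin() shows it in binary.
print(ord('A'), bin(ord('A')))   # 65 0b1000001
print(ord('B'))                  # 66

# A color is three bytes: how much red, green, and blue (0-255 each).
# Pure yellow, for example, is full red plus full green and no blue.
yellow = (255, 255, 0)
print(all(0 <= part <= 255 for part in yellow))   # True
```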

 

How is the binary converted into the actual application software?

 

            This can be answered in more than one way. The first answer is that the application software is converted into the binary! The application software is written in a language such as C, C++, or Java, and a program called a compiler translates the program, which has a lot of words and symbols in it, into a file that can be executed by the computer. This leads to the second way of answering this: how can a binary program run? The executable file is made up of instructions, each of which fits into a set size for the computer. An instruction contains binary codes indicating 1) the operation to be done, which is usually a very small part of the task (e.g. load a value into a fast part of memory), and 2) the information to do the operation on (e.g. what value is to be loaded, and into what part of memory). At this level (machine language), 5-10 instructions may be required to accomplish the work of one of the programmer's statements. The CPU is built to execute a certain set of these machine language instructions (called its instruction set).
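Python is not compiled to machine code, but its standard dis module shows the same idea at the bytecode level: one programmer statement turns into several lower-level load/operate/return-style instructions (a sketch for illustration, not an example from the class):

```python
import dis

def add(a, b):
    return a + b   # one statement in the programmer's language...

# ...becomes several lower-level instructions: load a, load b, add, return.
names = [ins.opname for ins in dis.get_instructions(add)]
print(names)
```

The exact instruction names vary between Python versions, but the pattern (several machine-level steps per source statement) is the same one the answer above describes.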

 

Why aren’t there analog computers?

 

A good, thorough answer to this question can be found at http://www.mackido.com/Hardware/AnalogVsDigital.html

Their summary: "It isn't that analog is bad, or that it can't be done -- some of the early computers, and some research computers have been analog. It is just that digital is simpler and faster -- which also means cheaper and more reliable." They have a thorough explanation of why this is true. The quick answer is that with binary digital equipment, you don't have to use as much power, and you don't have to wait as long for voltages to settle before reading their values.

 

Why is a 64 bit system so large if it’s 8 bits to a byte?

 

The big deal is that every bit added doubles whatever it is that we are keeping track of. So for keeping track of numbers, a 64 bit system's unit of info is 64 bits long, so it can hold numbers up to 2^64 - 1, which is almost 18.5 quintillion. A byte can hold numbers up to 2^8 - 1 = 255. But 9 bits can hold up to 511, 10 bits up to 1023, 11 bits up to 2047, 12 bits up to 4095, ... It really starts going up quickly. The number of bits also affects how many bytes of memory can effectively be addressed. With 8 bits, 256 locations can be identified. With 16 bits, 65,536 locations can be identified. (Bill Gates allegedly said around 1981 that "640K ought to be enough for anybody", a quote he has denied, reflecting the memory limits that early PC chips and the operating systems built for them were designed around.) With 32 bits, 4,294,967,296 locations (4GB) can be identified. With 64 bits, some 18 quintillion locations can be identified (with prefixes going mega, giga, tera, peta, exa, zetta, yotta, this is 18 exabytes worth of memory). Further, a 64 bit system also moves its unit of info all at once (64 bits on 64 wires), versus fewer bits on fewer wires on smaller systems, thus making it faster.
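The doubling described above is easy to verify; this short Python snippet (my illustration, not from the handout) prints the counts for common word sizes:

```python
# Every added bit doubles the number of values / addressable locations.
for bits in (8, 9, 10, 16, 32, 64):
    print(f"{bits:2d} bits -> {2**bits:,} locations (max number {2**bits - 1:,})")
```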

 

Are all computers built like the diagram shown?

 

No, this is a good picture of a personal computer, a one-person computer. To give an obvious difference, supercomputers may be drastically different, with multiple CPUs and sometimes multiple memories.

 

How does long term memory store things without power?

Reference for some of this: www.howstuffworks.com. Long term memory, or storage, can be handled with different technologies. Most common is to store info magnetically. For instance, a computer hard disk stores data on a magnetically coated surface or set of surfaces; the magnetic recording material is layered onto a high-precision aluminum or glass disk. Floppy disks also store info magnetically: a floppy disk uses a thin plastic base material coated with iron oxide. This oxide is a ferromagnetic material, meaning that if you expose it to a magnetic field it remains magnetized by the field, so a magnetic field can be used to record information. The energized write head puts data on the diskette by magnetizing minute, iron, bar-magnet particles embedded in the diskette surface. The magnetized particles have their north and south poles oriented in such a way that their pattern can be detected and read on a subsequent read operation. Later the data can be erased or rewritten by using a magnetic field to change the orientation of the particles; but left alone, the particles do not change orientation. Zip disks store info magnetically and work similarly. As with all magnetic disks, the particles affected by a magnetic field do not change orientation in the absence of that field; whatever is written remains the same until some other write on the same area is done.

CDs and DVDs store data optically. Flat areas reflect laser light and represent 1's; tiny pits in the disk scatter the laser light and represent 0's. Writing changes the contours of the disk via laser. An optical disk is changed only by a disk write, so stored data will not be changed or lost (and only rewritable optical disks can have data changed after the initial write).

 

How does main memory work?

            Probably more than you wanted to know: http://www.howstuffworks.com/ram.htm/printable

 

Is there an easier way to figure out a (binary?) number rather than adding it mathematically?

            Actually, it is rare to have to convert numbers. When I was young, we worked with hexadecimal numbers, which are base 16, as an easier shorthand for binary (this works because 16 is a power of 2, namely 2^4, so each hex digit corresponds to exactly four bits). Anyway, we frequently added and subtracted hexadecimal numbers from each other (we had calculators that worked with "hex", but I learned to do it in my head), but rarely had to convert. With today's symbolic debuggers, even that need has largely disappeared. Today, I believe it is important to understand that computers work with binary numbers, and some of the implications of that: powers of two show up everywhere, in memory size, storage size, and so on. But actually doing binary math or conversions is not that important today. If I were going to do a lot of conversions, I would write a program to do it!
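And indeed, such a program is tiny in most languages; in Python (my choice for illustration), the built-in functions already do the conversions:

```python
# int() with an explicit base parses binary or hex strings;
# bin() and hex() go the other direction.
print(int('1000001', 2))   # 65  (binary -> decimal)
print(int('41', 16))       # 65  (hex -> decimal)
print(bin(65))             # 0b1000001
print(hex(65))             # 0x41
# 16 == 2**4, so each hex digit stands for exactly four bits:
print(bin(0xF))            # 0b1111
```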