ASCII vs UNICODE
Definition and Explanation
The American Standard Code for Information Interchange (ASCII) is a digital code originally based on the English alphabet. ASCII codes represent numbers, letters, punctuation and text in computers, communication equipment, and other devices that use text. Most modern computer codes are based on ASCII, though they support many additional letters and symbols. ASCII includes definitions for 128 characters: 33 are non-printing characters that affect how text and space is processed, and 95 are printable characters, including the space. Standard ASCII uses 7-bit binary, giving a maximum of 128 combinations; the extended 8-bit form allows up to 256. ASCII uses a specific binary combination to represent each letter, capital letter, number and symbol, e.g. 0100 1010 = J and 0110 1010 = j. So every letter, and each case of it, has a different binary combination.
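Most programming languages expose these codes directly; a minimal Python sketch of the binary examples above (using the built-in ord() function, which returns a character's code):

```python
# Each ASCII character maps to a single number.
# format(..., "08b") shows that number as 8-bit binary.
for ch in ("J", "j"):
    code = ord(ch)
    print(ch, "=", format(code, "08b"))
# J = 01001010
# j = 01101010

# Upper and lower case differ by exactly one bit (value 32, the 0x20 bit).
assert ord("j") - ord("J") == 32
```

This also shows why case matters to a computer: "J" and "j" are simply two different numbers.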
Unicode is a computer character encoding standard that covers most of the world's writing systems. Developed in conjunction with the Universal Character Set standard and published in book form as The Unicode Standard, it consists of more than 110,000 characters covering 100 scripts, a set of code charts for visual reference, an encoding methodology, a set of reference data computer files, and a number of related items such as character properties (including upper and lower case). As of September 2012, the most recent version is Unicode 6.2. Unicode text is commonly stored using 16-bit code units, and its code space allows far more combinations than ASCII, making it much more extensive. Unicode uses a specific code point to represent each letter, capital letter, number and symbol, e.g. 0045 = E and 0065 = e. Unicode code points are written in HEXADECIMAL to shorten the underlying 16-bit binary.
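The same functions work for Unicode code points; a short Python sketch of the hexadecimal examples above (the euro sign is an extra character added here to illustrate a symbol that ASCII cannot represent):

```python
# Unicode code points are usually written as U+XXXX in hexadecimal.
for ch in ("E", "e", "\u20ac"):  # "\u20ac" is the euro sign
    print(ch, "= U+%04X" % ord(ch))
# E = U+0045
# e = U+0065
# the euro sign = U+20AC

# chr() goes the other way, from code point back to character.
assert chr(0x0045) == "E"
assert chr(0x0065) == "e"
```

The first 128 Unicode code points match ASCII exactly, which is why E is 0x45 in both systems.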