
Info: M.Sc. Thesis

M.Sc. by research at the School of Electrical & Electronics Engineering, USM, specializing in Microelectronics (Integrated Circuit Design). Thesis submitted for evaluation in January 2001.

Thesis: Implementation of a Cascadable MLP Neural Processor with Sliding Feeder Technique

Abstract: The design of a 16-bit floating-point MLP neural processor is presented. It employs the sliding feeder technique, which reduces the complexity of the interconnections common to neural networks. A new 16-bit floating-point data format is also introduced; its ability to match its 32-bit counterpart in computing the MLP with the BEP algorithm is remarkable. In a test training a network to solve the linearly inseparable XOR logical function, the neural processor successfully converged to an acceptable solution in the same number of training iterations required by a processor using the standard single-precision format. The leading-zero detection method has also been improved to reduce area consumption. The standard MLP with BEP algorithm has been restructured into an object-oriented form: instead of holding all the data (weights, biases and node values) in a single memory heap, the data are distributed among the cascaded neural processors. This restructuring also allows the processor to accommodate the sliding feeder technique. Several supporting software tools (Code Generator, FPC Tool, and Neural Processor Simulator) have also been developed; together they contributed to the design, simulation and validation of the neural processor. The serial transmission circuit is based on simple shift logic. Its speed affects the network only when data must be fed forward or backward between layers, because during data sliding, transmission overlaps with the time-consuming floating-point calculation.
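The exact bit layout of the thesis's 16-bit format and its leading-zero detection circuit are not given on this page. As a minimal illustrative sketch only, the snippet below emulates storing a value in a 16-bit floating-point format (using IEEE 754 half precision as a stand-in, which may differ from the thesis's own format) and a naive leading-zero detector of the kind used when normalizing a floating-point result. All names here are hypothetical, not taken from the thesis.

```python
import struct

def to_fp16(x: float) -> float:
    # Round-trip a Python float through a 16-bit storage format.
    # IEEE 754 half precision (struct code 'e') is used as a stand-in
    # for the thesis's custom 16-bit format, which is an assumption.
    return struct.unpack('<e', struct.pack('<e', x))[0]

def leading_zeros(mag: int, width: int = 16) -> int:
    # Naive leading-zero detector: scan a fixed-width unsigned integer
    # from the most significant bit and count zeros until the first 1.
    # Hardware designs improve on this linear scan; this only shows
    # the function being computed.
    for i in range(width):
        if mag & (1 << (width - 1 - i)):
            return i
    return width

# Storing 0.1 in 16 bits loses precision, but the relative error stays
# within half an ulp of a 10-bit mantissa (about 5e-4).
stored = to_fp16(0.1)
rel_err = abs(stored - 0.1) / 0.1
```

With a 10-bit mantissa the worst-case relative rounding error is 2^-11, which is consistent with the abstract's observation that 16-bit arithmetic can still train the XOR network as quickly as single precision.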

archive/resume_s1edu_1msc.txt · Last modified: by azman