Monday, May 3, 2010

Class XI Questions with Answers

What is a Translator? Explain its various types.
A translator is a program that converts a source program into a machine-understandable form.
There are three types of translators, as follows:
a) Assembler
b) Compiler
c) Interpreter

Assembler: An assembler is a program that translates an assembly language program into a machine language program.

Compiler: A program that translates a high-level language program into machine language is called a compiler. A compiler is more sophisticated than an assembler: it checks all kinds of limits, ranges, errors, etc. Compilation takes more time and occupies a larger part of memory, but the compiled program runs faster than an interpreted one. For example: TC.EXE, JAVAC.EXE, etc.

Interpreter: An interpreter is a program that translates and executes a high-level language program one statement at a time. An interpreter is a smaller program than a compiler, but the programs it runs execute more slowly than compiled ones. For example: GWBASIC.EXE, JAVA.EXE, etc.
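
To make the statement-at-a-time idea concrete, here is a minimal sketch of an interpreter for a made-up toy language, written in Python (the language and its commands are invented for illustration only):

    # A toy interpreter sketch (illustrative only, not a real product).
    # It translates and executes one statement at a time, the way
    # GWBASIC-style interpreters do.
    def interpret(program):
        variables = {}                      # memory for the toy program
        for line in program.splitlines():
            tokens = line.split()
            if not tokens:
                continue
            if tokens[0] == "LET":          # e.g. LET X 5
                variables[tokens[1]] = int(tokens[2])
            elif tokens[0] == "ADD":        # e.g. ADD X 3
                variables[tokens[1]] += int(tokens[2])
            elif tokens[0] == "PRINT":      # e.g. PRINT X
                print(variables[tokens[1]])

    interpret("LET X 5\nADD X 3\nPRINT X")  # prints 8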

What is Machine Level Language? Mention its advantages and disadvantages.

Machine level language is a programming language that can be directly understood and obeyed by a machine (computer) without conversion (translation). Different for each type of CPU, it is the native binary language (composed of only two symbols: 0 and 1) of the computer, and it is difficult for humans to read and understand. Programmers commonly use more English-like languages (called high-level languages) such as BASIC, C, Java, etc., to write programs, which are then translated into machine language (a low-level language) by an assembler, compiler, or interpreter.

Advantages and disadvantages of MLL are as follows:
Advantages
a) Programs execute in less time.
b) Programs written in this language need not be translated.

Disadvantages
a) Difficult to learn
b) It is a machine-oriented language.
c) Knowledge of the computer's internal architecture is essential for program coding.

Saturday, April 24, 2010

Class XI computer notes new one

What is an operating system?
An operating system (sometimes abbreviated as "OS") is the program that, after being initially loaded into the computer by a boot program, manages all the other programs in a computer. The other programs are called applications or application programs. The application programs make use of the operating system by making requests for services through a defined application program interface (API). In addition, users can interact directly with the operating system through a user interface such as a command language or a graphical user interface (GUI).
An operating system performs these services for applications:
In a multitasking operating system where multiple programs can be running at the same time, the operating system determines which applications should run in what order and how much time should be allowed for each application before giving another application a turn.
It manages the sharing of internal memory among multiple applications.
It handles input and output to and from attached hardware devices, such as hard disks, printers, and dial-up ports.
It sends messages to each application or interactive user (or to a system operator) about the status of operation and any errors that may have occurred.
It can offload the management of what are called batch jobs (for example, printing) so that the initiating application is freed from this work.
On computers that can provide parallel processing, an operating system can manage how to divide the program so that it runs on more than one processor at a time.
All major computer platforms (hardware and software) require and sometimes include an operating system. Linux, Windows 2000, VMS, OS/400, AIX, and z/OS are all examples of operating systems.

Types of Operating System
An operating system is a software component of a computer system that is responsible for the management of various activities of the computer and the sharing of computer resources. It hosts the several applications that run on a computer and handles the operations of computer hardware. Users and application programs access the services offered by the operating system by means of system calls and application programming interfaces. Users interact with operating systems through Command Line Interfaces (CLIs) or Graphical User Interfaces (GUIs). In short, an operating system enables user interaction with computer systems by acting as an interface between users or application programs and the computer hardware. Here is an overview of the different types of operating systems.

Real-time Operating System: This is a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms so that they can achieve a deterministic nature of behaviour. The main objective of real-time operating systems is their quick and predictable response to events. They have either an event-driven or a time-sharing design. An event-driven system switches between tasks based on their priorities, while a time-sharing operating system switches tasks based on clock interrupts.

Multi-user and Single-user Operating Systems: Operating systems of this type allow multiple users to access a computer system concurrently. Time-sharing systems can be classified as multi-user systems, as they enable multiple users to access a computer through the sharing of time. Single-user operating systems, as opposed to multi-user operating systems, are usable by a single user at a time. Being able to have multiple accounts on a Windows operating system does not make it a multi-user system; rather, only the network administrator is the real user. But for a Unix-like operating system, it is possible for two users to log in at a time, and this capability of the OS makes it a multi-user operating system.

Multi-tasking and Single-tasking Operating Systems: When a single program is allowed to run at a time, the system is grouped under a single-tasking system, while if the operating system allows the execution of multiple tasks at one time, it is classified as a multi-tasking operating system. Multi-tasking can be of two types, namely pre-emptive or co-operative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multitasking. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. Versions of MS Windows prior to Windows 95 supported cooperative multitasking.

Distributed Operating System: An operating system that manages a group of independent computers and makes them appear to be a single computer is known as a distributed operating system. The development of networked computers that could be linked and made to communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they make a distributed system.

Embedded System: The operating systems designed for use in embedded computer systems are known as embedded operating systems. They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design. Windows CE, FreeBSD and Minix 3 are some examples of embedded operating systems.

Operating systems thus contribute to the simplification of human interaction with the computer hardware. They are responsible for linking application programs with the hardware, thus achieving easy user access to computers.
Functions of Operating System
Today most operating systems perform the following important functions:
1. Processor management, that is, assignment of the processor to the different tasks being performed by the computer system.
2. Memory management, that is, allocation of main memory and other storage areas to the system programs as well as user programs and data.
3. Input/output management, that is, coordination and assignment of the different input and output devices while one or more programs are being executed.
4. File management, that is, the storage and transfer of files from one storage device to another. It also allows all files to be easily changed and modified through the use of text editors or some other file manipulation routines.
5. Establishment and enforcement of a priority system; that is, it determines and maintains the order in which jobs are to be executed in the computer system.
6. Automatic transition from job to job as directed by special control statements.
7. Interpretation of commands and instructions.
8. Coordination and assignment of compilers, assemblers, utility programs, and other software to the various users of the computer system.
9. Facilitation of easy communication between the computer system and the computer operator (human). It also establishes data security and integrity.
What is Batch Processing?
Batch processing is the execution of a series of programs ("jobs") on a computer without manual intervention.
Batch jobs are set up so they can be run to completion without manual intervention, so all input data is preselected through scripts or command-line parameters. This is in contrast to "online" or interactive programs, which prompt the user for such input. A program takes a set of data files as input, processes the data, and produces a set of output data files. This operating environment is termed "batch processing" because the input data are collected into batches of files and are processed in batches by the program.
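
As a sketch of the idea, here is a minimal batch-style job in Python (the file-naming scheme is hypothetical): all input is preselected through command-line parameters, and the user is never prompted.

    # Minimal batch-job sketch: no user interaction; input data files
    # are preselected via command-line parameters (names hypothetical).
    import sys

    def main():
        for path in sys.argv[1:]:           # input data files given up front
            with open(path) as f:
                total = sum(float(line) for line in f if line.strip())
            with open(path + ".out", "w") as out:   # one output file per input
                out.write(str(total) + "\n")

    if __name__ == "__main__":
        main()      # e.g. python batch.py day1.txt day2.txt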

Common batch processing usage

Data processing
A typical batch processing procedure is end-of-day (EOD) reporting, especially on mainframes. Historically, systems were designed to have a batch window where online subsystems were turned off and system capacity was used to run jobs common to all data (accounts, users or customers) on a system. In a bank, for example, EOD jobs include interest calculation, generation of reports and data sets for other systems, printing (statements), and payment processing.

Printing
A popular computerized batch processing procedure is printing. This normally involves the operator selecting the documents they need printed and indicating to the batch printing software when and where they should be output, as well as the priority of each print job. The jobs are then sent to the print queue, from where the printing daemon sends them to the printer.

Databases
Batch processing is also used for efficient bulk database updates and automated transaction processing, as contrasted to interactive online transaction processing (OLTP) applications.

Images
Batch processing is often used to perform various operations with digital images. There exist computer programs that let one resize, convert, watermark, or otherwise edit image files.

Converting
Batch processing is also used for converting a number of computer files from one format to another. This is to make files portable and versatile especially for proprietary and legacy files where viewers are not easy to come by.

Job scheduling
UNIX utilizes cron and at facilities to allow for scheduling of complex job scripts. Windows has a job scheduler. Most high-performance computing clusters use batch processing to maximize cluster usage.

Buffer

A buffer is a data area shared by hardware devices or program processes that operate at different speeds or with different sets of priorities. The buffer allows each device or process to operate without being held up by the other. In order for a buffer to be effective, the size of the buffer and the algorithms for moving data into and out of the buffer need to be considered by the buffer designer. Like a cache, a buffer is a "midpoint holding place" but exists not so much to accelerate the speed of an activity as to support the coordination of separate activities.

This term is used both in programming and in hardware. In programming, buffering sometimes implies the need to screen data from its final intended place so that it can be edited or otherwise processed before being moved to a regular file or database.
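
As an illustration, here is a minimal sketch (sizes and delays are made up) of a buffer that decouples a fast producer from a slower consumer, using Python's standard queue module:

    # Bounded-buffer sketch: a fast producer and a slow consumer are
    # decoupled by a fixed-size queue (sizes and delays illustrative).
    import queue, threading, time

    buf = queue.Queue(maxsize=4)            # the buffer: at most 4 items

    def producer():
        for i in range(8):
            buf.put(i)                      # blocks only when the buffer is full
        buf.put(None)                       # sentinel: no more data

    def consumer():
        while True:
            item = buf.get()
            if item is None:
                break
            time.sleep(0.1)                 # simulate a slower device
            print("consumed", item)

    threading.Thread(target=producer).start()
    consumer()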

Spooling
Spooling is the process of sending data to a spool, or temporary storage area, in the computer's memory. This data may contain files or processes. Like a spool of thread, the data can build up within the spool as multiple files or jobs are sent to it. However, unlike a spool of thread, the first jobs sent to the spool are the first ones to be processed (FIFO, not LIFO).
The most common type of spooling is print spooling, where print jobs are sent to a print spool before being transmitted to the printer. For example, when you print a document from within an application, the document data is spooled to a temporary storage area while the printer warms up. As soon as the printer is ready to print the document, the data is sent from the spool to the printer and the document is printed.
Print spooling gets its name from technology used in the 1960s, when print jobs were stored on large reels of magnetic tape. The data from these reels was physically spooled to electrostatic printers, which printed the output saved to the tape.
An acronym for simultaneous peripheral operations on-line, spooling refers to putting jobs in a buffer, a special area in memory or on a disk where a device can access them when it is ready. Spooling is useful because devices access data at different rates. The buffer provides a waiting station where data can rest while the slower device catches up.
The most common spooling application is print spooling. In print spooling, documents are loaded into a buffer (usually an area on a disk), and then the printer pulls them off the buffer at its own rate. Because the documents are in a buffer where they can be accessed by the printer, you can perform other operations on the computer while the printing takes place in the background. Spooling also lets you place a number of print jobs on a queue instead of waiting for each one to finish before specifying the next one.

Virtual Memory
Virtual Memory is a feature of an operating system that enables a process to use a memory (RAM) address space that is independent of other processes running in the same system, and use a space that is larger than the actual amount of RAM present, temporarily relegating some contents from RAM to a disk, with little or no overhead.
In a system using virtual memory, the physical memory is divided into equally-sized pages. The memory addressed by a process is also divided into logical pages of the same size. When a process references a memory address, the memory manager fetches from disk the page that includes the referenced address, and places it in a vacant physical page in the RAM. Subsequent references within that logical page are routed to the physical page. When the process references an address from another logical page, it too is fetched into a vacant physical page and becomes the target of subsequent similar references.
If the system does not have a free physical page, the memory manager swaps out a logical page into the swap area - usually a paging file on disk (in Windows XP: pagefile.sys), and copies (swaps in) the requested logical page into the now-vacant physical page. The page swapped out may belong to a different process. There are many strategies for choosing which page is to be swapped out. (One is LRU: the Least Recently Used page is swapped out.) If a page is swapped out and then is referenced, it is swapped back in, from the swap area, at the expense of another page.
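
As an illustration of the LRU strategy just mentioned, here is a minimal page-replacement simulation in Python (the reference string and frame count are made up):

    # LRU page-replacement sketch: 3 physical frames and a made-up
    # reference string; counts how many page faults occur.
    from collections import OrderedDict

    def lru_faults(references, frames=3):
        memory = OrderedDict()              # page -> None, ordered by recency
        faults = 0
        for page in references:
            if page in memory:
                memory.move_to_end(page)    # page hit: mark most recently used
            else:
                faults += 1                 # page fault: fetch from "disk"
                if len(memory) == frames:
                    memory.popitem(last=False)  # swap out least recently used
                memory[page] = None
        return faults

    print(lru_faults([1, 2, 3, 1, 4, 2, 5, 1]))  # prints 7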
Virtual memory enables each process to act as if it has the whole memory space to itself, since the addresses that it uses to reference memory are translated by the virtual memory mechanism into different addresses in physical memory. This allows different processes to use the same memory addresses - the memory manager will translate references to the same memory address by two different processes into different physical addresses. One process generally has no way of accessing the memory of another process. A process may use an address space larger than the available physical memory, and each reference to an address will be translated into an existing physical address. The bound on the amount of memory that a process may actually address is the size of the swap area, which may be smaller than the addressable space. (A process can have an address space of 4GB yet actually use only 2GB, and this can run on a machine with a pagefile of 2GB.)
The size of the virtual memory on a system is smaller than the sum of the physical RAM and the swap area, since pages that are swapped in are not erased from the swap area, and so take up two pages of the sum of sizes.
Usually under Windows, the size of the swap area is 1.5 times the size of the RAM.
The virtual memory manager might issue the message: "Your system is low on virtual memory. Windows is increasing the size of your virtual memory paging file." This happens if it is required to swap out a page from RAM to the pagefile while all pages in the pagefile are already taken. With that message, it will allocate more space to the pagefile and use the added space to store the newly swapped-out page (and subsequent pages).
One case that might cause the system to want to enlarge the pagefile is that too many processes are running. In this case, only relatively little of each process' memory can fit in the RAM, and relatively many pages reside in the pagefile. In this case, the virtual memory system is required to hold too many pages - a "small" number of pages per process times "many" processes. In such a case it is also likely that the system will run slowly, since pages need to be swapped in and out more frequently.
Another case that might cause the system to want to enlarge the pagefile is that a process has a memory leak. There, the process is occupying a lot of unused memory, which will likely be swapped away in the pagefile and never be swapped in.
The usage of the pagefile in Windows can be seen in the Task Manager by checking the Virtual Memory Size box after View, Select Columns.
The pagefile can be fragmented, and tools exist to defrag it.

What is SWAP?
Swapping means replacing pages or segments of data in memory. Swapping is a useful technique that enables a computer to execute programs and manipulate data files larger than main memory. The operating system copies as much data as possible into main memory and leaves the rest on the disk. When the operating system needs data from the disk, it exchanges a portion of data (called a page or segment) in main memory with a portion of data on the disk.
DOS does not perform swapping, but most other operating systems, including OS/2, Windows, and UNIX, do.
Swapping is often called paging or virtual memory.
Interrupt
In computing, an interrupt is an asynchronous signal indicating the need for attention or a synchronous event in software indicating the need for a change in execution.
A hardware interrupt causes the processor to save its state of execution and begin execution of an interrupt handler.
Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt.
Interrupts are a commonly used technique for computer multitasking, especially in real-time computing. Such a system is said to be interrupt-driven.[1]

Interrupt
An interrupt is a signal from a device attached to a computer or from a program within the computer that causes the main program that operates the computer (the operating system) to stop and figure out what to do next. Almost all personal (or larger) computers today are interrupt-driven - that is, they start down the list of computer instructions in one program (perhaps an application such as a word processor) and keep running the instructions until either (A) they can't go any further or (B) an interrupt signal is sensed. After the interrupt signal is sensed, the computer either resumes running the program it was running or begins running another program.
Basically, a single computer can perform only one computer instruction at a time. But, because it can be interrupted, it can take turns among the programs or sets of instructions that it performs. This is known as multitasking. It allows the user to do a number of different things at the same time. The computer simply takes turns managing the programs that the user effectively starts. Of course, the computer operates at speeds that make it seem as though all of the user's tasks are being performed at the same time. (The computer's operating system is good at using little pauses in operations and user think time to work on other programs.)
An operating system usually has some code that is called an interrupt handler. The interrupt handler prioritizes the interrupts and saves them in a queue if more than one is waiting to be handled. The operating system has another little program, sometimes called a scheduler, that figures out which program to give control to next.
In general, there are hardware interrupts and software interrupts. A hardware interrupt occurs, for example, when an I/O operation is completed, such as reading some data into the computer from a tape drive. A software interrupt occurs when an application program terminates or requests certain services from the operating system. In a personal computer, a hardware interrupt request (IRQ) has a value associated with it that associates it with a particular device.

What is thread?
In computer science, a thread of execution results from a fork of a computer program into two or more concurrently running tasks. The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.
On a single processor, multithreading generally occurs by time-division multiplexing (as in multitasking): the processor switches between different threads. This context switching generally happens frequently enough that the user perceives the threads or tasks as running at the same time. On a multiprocessor or multi-core system, the threads or tasks will generally run at the same time, with each processor or core running a particular thread or task.
Many modern operating systems directly support both time-sliced and multiprocessor threading with a process scheduler. The kernel of an operating system allows programmers to manipulate threads via the system call interface; such threads are called kernel threads. A lightweight process (LWP) is a specific type of kernel thread that shares the same state and information.
Programs can also have user-space threads, using timers, signals, or other methods to interrupt their own execution and perform a sort of ad hoc time-slicing.
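
A minimal sketch of multiple threads sharing the memory of one process, in Python (the shared counter and work amounts are illustrative):

    # Two or more threads inside one process share the same memory;
    # a lock serializes access to the shared counter (illustrative).
    import threading

    counter = 0
    lock = threading.Lock()

    def worker(amount):
        global counter
        for _ in range(amount):
            with lock:                      # protect the shared data
                counter += 1

    threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)                          # prints 4000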

What is CPU Scheduling?
Scheduling is a key concept in computer multitasking, multiprocessing operating system and real-time operating system designs. Scheduling refers to the way processes are assigned to run on the available CPUs, since there are typically many more processes running than there are available CPUs. This assignment is carried out by software known as a scheduler and dispatcher.
The scheduler is concerned mainly with the following (a small simulation follows this list):
CPU utilization - to keep the CPU as busy as possible.
Throughput - number of processes that complete their execution per time unit.
Turnaround - total time between submission of a process and its completion.
Waiting time - amount of time a process has been waiting in the ready queue.
Response time - amount of time it takes from when a request was submitted until the first response is produced.
Fairness - Equal CPU time to each thread.
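
Here is a minimal sketch of first-come-first-served (FCFS) scheduling in Python (the burst times are made up), computing the waiting and turnaround times defined above:

    # FCFS scheduling sketch: burst times are made up; all processes
    # are assumed to arrive at time 0 for simplicity.
    def fcfs(bursts):
        clock = 0
        for pid, burst in enumerate(bursts):
            waiting = clock                 # time spent in the ready queue
            clock += burst                  # process runs to completion
            turnaround = clock              # submission-to-completion time
            print(f"P{pid}: waiting={waiting}, turnaround={turnaround}")

    fcfs([24, 3, 3])    # one long job makes the two short jobs wait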

What is Programming Language?
A programming language is an artificial language designed to express computations that can be performed by a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine, to express algorithms precisely, or as a mode of human communication.

What is Machine Language?
Machine code or machine language is a system of instructions and data executed directly by a computer's central processing unit. Machine code may be regarded as a primitive (and cumbersome) programming language or as the lowest-level representation of a compiled and/or assembled computer program. Programs in interpreted languages[1] are not themselves represented by machine code, although their interpreter (which may be seen as a processor executing the higher-level program) often is. Machine code is sometimes called native code when referring to platform-dependent parts of language features or libraries.[2] Machine code should not be confused with so-called "bytecode", which is executed by an interpreter.

What is Assembly Language?
A much more readable rendition of machine language, called assembly language, uses mnemonic codes to refer to machine code instructions, rather than simply using the instructions' numeric values. For example, on the Zilog Z80 processor, the machine code 00000101, which causes the CPU to decrement the B processor register, would be represented in assembly language as DEC B.

What is High level Language?
A high-level programming language is a programming language with strong abstraction from the details of the computer. In comparison to low-level programming languages, it may use natural language elements, be easier to use, or be more portable across platforms. Such languages hide the details of CPU operations such as memory access models and management of scope.
This greater abstraction and hiding of details is generally intended to make the language user-friendly, as it includes concepts from the problem domain instead of those of the machine used. A high-level language isolates the execution semantics of a computer architecture from the specification of the program, making the process of developing a program simpler and more understandable with respect to a low-level language. The amount of abstraction provided defines how "high-level" a programming language is.[1]
The first high-level programming language to be designed for a computer was Plankalkül, created by Konrad Zuse. However, it was not implemented in his time and his original contributions were isolated from other developments.

What is 4th generation language?
A fourth-generation programming language (abbreviated 4GL) is a programming language or programming environment designed with a specific purpose in mind, such as the development of commercial business software[1]. In the evolution of computing, the 4GL (roughly the 1970s to 1990) followed the 3GL in an upward trend toward higher abstraction and statement power. The 4GL was followed by efforts to define and use a 5GL.
The natural-language, block-structured mode of the third-generation programming languages improved the process of software development. However, 3GL development methods can be slow and error-prone. It became clear that some applications could be developed more rapidly by adding a higher-level programming language and methodology which would generate the equivalent of very complicated 3GL instructions with fewer errors. In some senses, software engineering arose to handle 3GL development. 4GL and 5GL projects are more oriented toward problem solving and systems engineering.

First Generation Computer Language
A first-generation programming language is a machine-level programming language.
Originally, no translator was used to compile or assemble the first-generation language. The first-generation programming instructions were entered through the front panel switches of the computer system.
The main benefit of programming in a first-generation programming language is that the code a user writes can run very fast and efficiently, since it is directly executed by the CPU. However, machine language is a lot more difficult to learn than higher-generation programming languages, and it is far more difficult to edit if errors occur. In addition, if instructions need to be added into memory at some location, then all the instructions after the insertion point need to be moved down to make room in memory to accommodate the new instructions. Doing so on a front panel with switches can be very difficult. Furthermore, portability is significantly reduced - in order to transfer code to a different computer it needs to be completely rewritten, since the machine language for one computer could be significantly different from another computer. Architectural considerations make portability difficult too. For example, the number of registers on one CPU architecture could differ from those of another.

Second Generation Programming Language
Second-generation programming language is a generational way to categorize assembly languages. The term was coined to provide a distinction from higher level third-generation programming languages (3GL) such as COBOL and earlier machine code languages. Second-generation programming languages have the following properties:
The code can be read and written by a programmer. To run on a computer it must be converted into a machine readable form, a process called assembly.
The language is specific to a particular processor family and environment.
Second-generation languages are sometimes used in kernels and device drivers (though C is generally employed for this in modern kernels), but more often find use in extremely intensive processing such as games, video editing, graphic manipulation/rendering.
One method for creating such code is by allowing a compiler to generate a machine-optimized assembly language version of a particular function. This code is then hand-tuned, gaining both the brute-force insight of the machine optimizing algorithm and the intuitive abilities of the human optimizer.

Third Generation Computer Language
A third-generation programming language (3GL) is a refinement of a second-generation programming language. The second generation of programming languages brought logical structure to software. The third generation brought refinements to make the languages more programmer-friendly. This includes features like good support for aggregate data types, and expressing concepts in a way that favors the programmer, not the computer (e.g. no longer needing to state the length of multi-character (string) literals in Fortran). A third generation language improves over a second generation language by having the computer take care of non-essential details, not the programmer. High level language is a synonym for third-generation programming language.
First introduced in the late 1950s, Fortran, ALGOL, and COBOL are early examples of this sort of language.
Most popular general-purpose languages today, such as C, C++, C#, Java, and Python, are also third-generation languages.
Most 3GLs support structured programming.

The Forth Programming Language
Forth is a structured, imperative, reflective, stack-based computer programming language and programming environment. Forth is sometimes spelled in all capital letters, following the customary usage during its earlier years, although the name is not an acronym.
A procedural programming language without type checking, Forth features both interactive execution of commands (making it suitable as a shell for systems that lack a more formal operating system) and the ability to compile sequences of commands for later execution. Some Forth implementations (usually early versions or those written to be extremely portable) compile threaded code, but many implementations today generate optimized machine code like other language compilers.
Although not as popular as other programming systems, Forth has enough support to keep several language vendors and contractors in business. Forth is currently used in boot loaders such as Open Firmware, space applications,[1] and other embedded systems. Gforth, an implementation of Forth by the GNU Project, is actively maintained, with the last release in December 2008. The 1994 standard is currently undergoing revision, provisionally titled Forth 200x.

Fifth Generation Computer Language
A fifth-generation programming language (abbreviated 5GL) is a programming language based around solving problems using constraints given to the program, rather than using an algorithm written by a programmer. Most constraint-based and logic programming languages and some declarative languages are fifth-generation languages.
While fourth-generation programming languages are designed to build specific programs, fifth-generation languages are designed to make the computer solve a given problem without the programmer. This way, the programmer only needs to worry about what problems need to be solved and what conditions need to be met, without worrying about how to implement a routine or algorithm to solve them. Fifth-generation languages are used mainly in artificial intelligence research. Prolog, OPS5, and Mercury are examples of fifth-generation languages.
These types of languages were also built upon Lisp, many originating on the Lisp machine, such as ICAD. Then, there are many frame languages, such as KL-ONE.
In the 1990s, fifth-generation languages were considered to be the wave of the future, and some predicted that they would replace all other languages for system development, with the exception of low-level languages.[citation needed] Most notably, from 1982 to 1993 Japan[1][2] put much research and money into their fifth generation computer systems project, hoping to design a massive computer network of machines using these tools.
However, as larger programs were built, the flaws of the approach became more apparent. It turns out that, starting from a set of constraints defining a particular problem, deriving an efficient algorithm to solve it is a very difficult problem in itself. This crucial step cannot yet be automated and still requires the insight of a human programmer.
Today, fifth-generation languages are back as a possible level of computer language. A number of software vendors currently claim that their software meets the visual "programming" requirements of the 5GL concept.

What is Flow Chart?
Flow charts are easy-to-understand diagrams showing how steps in a process fit together. This makes them useful tools for communicating how processes work, and for clearly documenting how a particular job is done. Furthermore, the act of mapping a process out in flow chart format helps you clarify your understanding of the process, and helps you think about where the process can be improved.
A flow chart can therefore be used to:
Define and analyze processes;
Build a step-by-step picture of the process for analysis, discussion, or communication; and
Define, standardize or find areas for improvement in a process
Also, by conveying the information or processes in a step-by-step flow, you can then concentrate more intently on each individual step, without feeling overwhelmed by the bigger picture.
How to Use the Tool:
Most flow charts are made up of three main types of symbol:
Elongated circles, which signify the start or end of a process;

Rectangles, which show instructions or actions; and
Diamonds, which show decisions that must be made
Within each symbol, write down what the symbol represents. This could be the start or finish of the process, the action to be taken, or the decision to be made.
Symbols are connected one to the other by arrows, showing the flow of the process.

Modular programming
Modular programming is a software design technique that increases the extent to which software is composed from separate parts, called modules. Conceptually, modules represent a separation of concerns, and improve maintainability by enforcing logical boundaries between components.[1] Modules are typically incorporated into the program through interfaces. A module interface expresses the elements that are provided and required by the module. The elements defined in the interface are detectable by other modules. The implementation contains the working code that corresponds to the elements declared in the interface.
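
As a sketch of the idea in Python (the module and function names are hypothetical), the interface is what the module declares and exports, while the implementation stays behind it:

    # Module sketch (hypothetical names): the interface is the exported
    # function; the helper is an implementation detail.
    # --- file: billing.py ---
    __all__ = ["total_due"]                 # the module's declared interface

    def _tax(amount):                       # leading underscore: internal detail
        return amount * 0.13

    def total_due(amount):
        """Interface element that other modules may rely on."""
        return amount + _tax(amount)

    # Another module would use only the interface:
    # from billing import total_due
    print(total_due(100.0))                 # 113.0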

Structured programming
It is possible to do structured programming in any programming language, though it is preferable to use something like a procedural programming language. Since about 1970 when structured programming began to gain popularity as a technique, most new procedural programming languages have included features to encourage structured programming (and sometimes have left out features that would make unstructured programming easy). Some of the better known structured programming languages are ALGOL, Pascal, PL/I and Ada.
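
A minimal illustration of the structured style in Python: control flows only through sequence, selection and iteration within a single-entry, single-exit function, with no arbitrary jumps (the task itself is made up):

    # Structured-programming sketch: sequence, selection (if) and
    # iteration (while) inside a single-entry, single-exit function.
    def classify(numbers):
        evens, odds = 0, 0                  # sequence
        i = 0
        while i < len(numbers):             # iteration
            if numbers[i] % 2 == 0:         # selection
                evens += 1
            else:
                odds += 1
            i += 1
        return evens, odds                  # single exit point

    print(classify([1, 2, 3, 4, 5]))        # prints (2, 3)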

What is Object-Oriented Programming?
Object-oriented programming (OOP) is a programming paradigm that uses "objects" – data structures consisting of data fields and methods together with their interactions – to design applications and computer programs. Programming techniques may include features such as data abstraction, encapsulation, modularity, polymorphism, and inheritance. It was not commonly used in mainstream software application development until the early 1990s.
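
A minimal Python sketch of these ideas (the classes are made up): data fields and methods live together in objects, and subclasses inherit and polymorphically override behaviour:

    # OOP sketch (made-up classes): encapsulation, inheritance
    # and polymorphism in a few lines.
    class Shape:
        def __init__(self, name):
            self._name = name               # encapsulated data field

        def area(self):
            raise NotImplementedError       # abstract behaviour

    class Square(Shape):                    # inheritance
        def __init__(self, side):
            super().__init__("square")
            self._side = side

        def area(self):                     # polymorphic override
            return self._side ** 2

    class Circle(Shape):
        def __init__(self, radius):
            super().__init__("circle")
            self._radius = radius

        def area(self):
            return 3.14159 * self._radius ** 2

    for shape in [Square(2), Circle(1)]:    # one interface, many forms
        print(shape.area())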

What is Pseudocode?
Pseudocode is a compact and informal high-level description of a computer programming algorithm that uses the structural conventions of a programming language, but is intended for human reading rather than machine reading. Pseudocode typically omits details that are not essential for human understanding of the algorithm, such as variable declarations, system-specific code and subroutines. The programming language is augmented with natural language descriptions of the details, where convenient, or with compact mathematical notation. The purpose of using pseudocode is that it is easier for humans to understand than conventional programming language code, and that it is a compact and environment-independent description of the key principles of an algorithm. It is commonly used in textbooks and scientific publications that document various algorithms, and also in planning of computer program development, for sketching out the structure of the program before the actual coding takes place.
No standard for pseudocode syntax exists, as a program in pseudocode is not an executable program. Pseudocode resembles, but should not be confused with, skeleton programs including dummy code, which can be compiled without errors. Flowcharts can be thought of as a graphical alternative to pseudocode.
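
For instance, a textbook might describe "find the largest value in a list" informally; below, that pseudocode appears as comments above a direct Python transcription (the algorithm chosen here is just an example):

    # Pseudocode (informal, not executable):
    #   set max to the first item of the list
    #   for each remaining item:
    #       if the item is greater than max, set max to the item
    #   return max
    #
    # The same algorithm transcribed into Python:
    def largest(items):
        max_value = items[0]
        for item in items[1:]:
            if item > max_value:
                max_value = item
        return max_value

    print(largest([3, 7, 2]))               # prints 7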

What is Decision Table?
Decision tables are a precise yet compact way to model complicated logic. Decision tables, like if-then-else and switch-case statements, associate conditions with actions to perform. But, unlike the control structures found in traditional programming languages, decision tables can associate many independent conditions with several actions in an elegant way.
Decision tables are typically divided into four quadrants, as shown below.

The four quadrants:

  Conditions    Condition alternatives
  Actions       Action entries
Each decision corresponds to a variable, relation or predicate whose possible values are listed among the condition alternatives. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed for the set of condition alternatives the entry corresponds to. Many decision tables include in their condition alternatives the don't-care symbol, a hyphen. Using don't-cares can simplify decision tables, especially when a given condition has little influence on the actions to be performed. In some cases, entire conditions thought to be important initially are found to be irrelevant when none of their alternatives influence which actions are performed.
A technical support company writes a decision table to diagnose printer problems based upon symptoms described to them over the phone from their clients. The following is a balanced decision table.
Printer troubleshooter (an X marks an action to take; "-" marks an empty entry)

                                        Rules
                                        1  2  3  4  5  6  7  8
Conditions
  Printer does not print                Y  Y  Y  Y  N  N  N  N
  A red light is flashing               Y  Y  N  N  Y  Y  N  N
  Printer is unrecognized               Y  N  Y  N  Y  N  Y  N
Actions
  Check the power cable                 -  -  X  -  -  -  -  -
  Check the printer-computer cable      X  -  X  -  -  -  -  -
  Ensure printer software is installed  X  -  X  -  X  -  X  -
  Check/replace ink                     X  X  -  -  X  X  -  -
  Check for paper jam                   -  X  -  X  -  -  -  -
Of course, this is just a simple example (and it does not necessarily correspond to the reality of printer troubleshooting), but even so, it demonstrates how decision tables can scale to several conditions with many possibilities.
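
One way to carry such a table into a program is as a plain data structure. The sketch below encodes the printer table above in Python and looks up the actions for one combination of symptoms:

    # The printer decision table encoded as data: each rule maps a
    # (does-not-print, red-light, unrecognized) triple to its actions.
    RULES = {
        (True,  True,  True):  ["check printer-computer cable",
                                "ensure software is installed",
                                "check/replace ink"],
        (True,  True,  False): ["check/replace ink", "check for paper jam"],
        (True,  False, True):  ["check power cable",
                                "check printer-computer cable",
                                "ensure software is installed"],
        (True,  False, False): ["check for paper jam"],
        (False, True,  True):  ["ensure software is installed",
                                "check/replace ink"],
        (False, True,  False): ["check/replace ink"],
        (False, False, True):  ["ensure software is installed"],
        (False, False, False): [],
    }

    print(RULES[(True, False, True)])       # actions for rule 3 of the table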

What is a Decision Tree?
A decision tree (or tree diagram) is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. Decision trees are commonly used in operations research, specifically in decision analysis, to help identify a strategy most likely to reach a goal. Another use of decision trees is as a descriptive means for calculating conditional probabilities.

What is a database?
A database is a collection of information that is organized so that it can easily be accessed, managed, and updated. In one view, databases can be classified according to types of content: bibliographic, full-text, numeric, and images.
Database Management System


A database management system (DBMS), sometimes just called a database manager, is a program that lets one or more computer users create and access data in a database. The DBMS manages user requests (and requests from other programs) so that users and other programs are free from having to understand where the data is physically located on storage media and, in a multi-user system, who else may also be accessing the data. In handling user requests, the DBMS ensures the integrity of the data (that is, making sure it continues to be accessible and is consistently organized as intended) and security (making sure only those with access privileges can access the data). The most typical DBMS is a relational database management system (RDBMS). A standard user and program interface is the Structured Query Language (SQL). A newer kind of DBMS is the object-oriented database management system (ODBMS).

Types of Databases
There are several common types of databases, and each type has its own data model (how the data is structured). They include the flat model, hierarchical model, relational model and network model.

The Flat Model Database
In a flat model database, there is a two-dimensional (flat) array of data. For instance, there may be one column of information, and within this column it is assumed that each data item is related to the others. For instance, a flat model database might include only zip codes: within the database, there will only be one column, and each new row within that column will be a new zip code.

The Hierarchical Model Database
The hierarchical model database resembles a tree-like structure, similar to how Microsoft Windows organizes folders and files. In a hierarchical model database, each upward link is nested in order to keep data organized in a particular order on a same-level list. For instance, a hierarchical database of sales may list each day's sales as a separate file. Within this nested file are all of the sales (same types of data) for the day.

The Network Model
In a network model, the defining feature is that a record is stored with a link to other records - in effect, networked. These links (sometimes referred to as pointers) can be a variety of different types of information, such as node numbers or even a disk address.

The Relational Model
The relational model is the most popular type of database and an extremely powerful tool, not only to store information, but to access it as well. Relational databases are organized as tables. The beauty of a table is that the information can be accessed or added without reorganizing the tables. A table can have many records and each record can have many fields.
Tables are sometimes called relations. For instance, a company can have a database called customer orders; within this database will be several different tables or relations, all relating to customer orders. Tables can include customer information (name, address, contact info, customer number, etc.) and other tables (relations) such as orders that the customer previously placed (this can include item number, item description, payment amount, payment method, etc.). It should be noted that every record (group of fields) in a relational database has its own primary key. A primary key is a unique field that makes it easy to identify a record.
Relational databases use a program interface called SQL, or Structured Query Language. SQL is currently used on practically all relational databases. Relational databases are extremely easy to customize to fit almost any kind of data storage. You can easily create relations for items that you sell, employees that work for your company, etc.

Accessing Information Using a Database
While storing data is a great feature of databases, for many database users the most important feature is quick and simple retrieval of information. In a relational database, it is extremely easy to pull up information regarding an employee, but relational databases also add the power of running queries. Queries are requests to pull specific types of information and either show them in their natural state or create a report using the data. For instance, if you had a database of employees and it included tables such as salary and job description, you can easily run a query of which jobs pay over a certain amount. No matter what kind of information you store on your database, queries can be created using SQL to help answer important questions.
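
As a sketch of the salary query described above, using Python's built-in sqlite3 module (the table and column names are made up):

    # Query sketch with Python's built-in sqlite3 (names are made up):
    # which jobs pay over a certain amount?
    import sqlite3

    db = sqlite3.connect(":memory:")        # throwaway in-memory database
    db.execute("CREATE TABLE employees (name TEXT, job TEXT, salary REAL)")
    db.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                   [("Ann", "analyst", 52000), ("Bob", "clerk", 31000)])

    for row in db.execute(
            "SELECT DISTINCT job FROM employees WHERE salary > ?", (40000,)):
        print(row[0])                       # prints: analyst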

Storing a Database
Databases can be very small (less than 1 MB) or extremely large and complicated (terabytes as in many government databases), however all databases are usually stored and located on hard disk or other types of storage devices and are accessed via computer. Large databases may require separate servers and locations, however many small databases can fit easily as files located on your computer's hard drive.

Securing a Database
Obviously, many databases store confidential and important information that should not be easily accessed by just anyone. Many databases require passwords and other security features in order to access the information. While some databases can be accessed via the internet through a network, other databases are closed systems and can only be accessed on site.
Database - Advantages & Disadvantages

Advantages
Reduced data redundancy
Reduced updating errors and increased consistency
Greater data integrity and independence from applications programs
Improved data access to users through use of host and query languages
Improved data security
Reduced data entry, storage, and retrieval costs
Facilitated development of new applications program
Disadvantages
Database systems are complex, difficult, and time-consuming to design
Substantial hardware and software start-up costs
Damage to database affects virtually all applications programs
Extensive conversion costs in moving from a file-based system to a database system
Initial training required for all programmers and users

Components of a database
Last week we looked at what a relational database is about - and databases in general. This week takes it one stage further by looking at the component parts that make up a database. Some of you may be sitting there thinking, "Why do I need to know this? What's the point?" The point is so that you can expand your knowledge through books etc. When you do, you will come across the jargon, so it's important to know what these things are.

Tables, Columns and Rows
These three items form the building blocks of a database. They store the data that we want to save in our database.

Columns
Columns are akin to fields, that is, individual items of data that we wish to store. A customer's name, the price of a part, the date of an invoice are all examples of columns. They are also similar to the columns found in spreadsheets (the A, B, C etc along the top).

Rows
Rows are akin to records, as they contain the data of multiple columns (like the 1, 2, 3 etc. down the side of a spreadsheet). Unlike file records, though, it is possible to extract only the columns you want to make up a row of data. Old "records" that computers read forced the computer to read EVERYTHING, even if you only wanted a tiny portion of the record. In databases, a row can be made up of as many or as few columns as you want. This makes reading data much more efficient - you fetch only what you want.

Tables
A table is a logical group of columns. For example, you may have a table that stores details of customers' names and addresses. Another table would be used to store details of parts and yet another would be used for supplier's names and addresses.
It is the tables that make up the entire database, and it is important that we do not duplicate data at all. Only key values are duplicated across tables (and even then, within some tables they must be unique).

Keys
Keys are used to relate one table to another. For example, a customer places an order for some parts. We need to store the customer's details, the parts ordered, and the supplier of the parts (to ensure we have enough stock or can place a new order to restock).
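
A minimal sketch of such a relationship, using SQL via Python's built-in sqlite3 module (the tables are made up): the customer's primary key reappears in the orders table as a foreign key, relating the two tables.

    # Primary/foreign key sketch (made-up tables): orders are related
    # to customers through the customer's primary key.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE customers (
                      customer_id INTEGER PRIMARY KEY,
                      name        TEXT)""")
    db.execute("""CREATE TABLE orders (
                      order_id    INTEGER PRIMARY KEY,
                      customer_id INTEGER REFERENCES customers(customer_id),
                      part        TEXT)""")
    db.execute("INSERT INTO customers VALUES (1, 'Acme Ltd')")
    db.execute("INSERT INTO orders VALUES (100, 1, 'sprocket')")

    # Join the two tables through the key relationship:
    for row in db.execute("""SELECT c.name, o.part
                             FROM customers c JOIN orders o
                             ON o.customer_id = c.customer_id"""):
        print(row)                          # ('Acme Ltd', 'sprocket')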