Technology

Present software technologies

Introduction

A touch screen is a type of display that has a touch-sensitive transparent panel covering the screen. Instead of using a pointing device such as a mouse or light pen, you can use your finger to point directly at objects on the screen.
Although touch screens provide a natural interface for computer novices, they are unsatisfactory for many applications because the finger is a relatively large pointer, making it hard to select small areas of the screen accurately. In addition, most users find touch screens tiring on the arms after long use.

Touch screens are typically found on larger displays and in phones with integrated PDA features. Most are designed to work with either a finger or a special stylus. Tapping a specific point on the display activates the virtual button or feature shown at that location. Some phones with this feature can also recognize handwriting written on the screen with a stylus, as a way to quickly input lengthy or complex information.

A touch screen is an input device that allows users to operate a PC by simply touching the display. Touch input is suitable for a wide variety of computing applications, and a touch screen can be used with most PC systems as easily as other input devices such as trackballs or touch pads.

History Of Touch Screen Technology
A touch screen is a special type of visual display unit whose screen is sensitive to pressure or touch. The screen can detect the position of the point of touch. Touch screens are best suited to inputting simple, programmable choices, and the device is very user-friendly since it 'talks' with the user as the user picks choices on the screen.

Touch technology turns a CRT, flat panel display or flat surface into a dynamic data entry device that replaces both the keyboard and mouse. In addition to eliminating these separate data entry devices, touch offers an "intuitive" interface. In public kiosks, for example, users receive no more instruction than 'touch your selection'.
Specific areas of the screen are defined as "buttons" that the operator selects simply by touching them. One significant advantage of touch screen applications is that each screen can be customized to reflect only the valid options for each phase of an operation, greatly reducing the frustration of hunting for the right key or function.
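To make the idea of touch-defined "buttons" concrete, here is a minimal Python sketch of the hit-testing an application might perform when a touch coordinate arrives; the button names and pixel coordinates are invented for illustration.

# Minimal sketch: mapping a touch point to an on-screen "button" region.
# Button names and coordinates are hypothetical, for illustration only.
BUTTONS = {
    "Start": (0, 0, 200, 100),      # (x_min, y_min, x_max, y_max) in pixels
    "Help":  (0, 120, 200, 220),
    "Exit":  (0, 240, 200, 340),
}

def hit_test(x, y, buttons=BUTTONS):
    """Return the name of the button containing the touch point, or None."""
    for name, (x0, y0, x1, y1) in buttons.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

print(hit_test(50, 150))   # -> "Help"
print(hit_test(500, 500))  # -> None (touch outside every active area)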

Pen-based systems, such as the Palm Pilot® and signature capture systems, also use touch technology but are not included in this article. The essential difference is that the pressure levels are set higher for pen-based systems than for touch. Touch screens come in a wide range of options, from full color VGA and SVGA monitors designed for highly graphic Windows® or Macintosh® applications to small monochrome displays designed for keypad replacement and enhancement.

Specific figures on the growth of touch screen technology are hard to come by, but a 1995 study by Venture Development Corporation predicted overall growth of 17%, with at least 10% in the industrial sector. Vendors agree that touch screen technology is becoming more popular because of its ease of use, proven reliability, expanded functionality, and decreasing cost.

Introduction

Voice (and fax) service over cable networks is known as cable-based Internet Protocol (IP) telephony. Cable-based IP telephony holds the promise of simplified and consolidated communication services provided by a single carrier at a lower cost than consumers currently pay to separate Internet, television and telephony service providers. Cable operators have already worked through the technical challenges of providing Internet service and optimizing the existing bandwidth in their cable plants to deliver high-speed Internet access. Now, cable operators have turned their efforts to the delivery of integrated Internet and voice service using that same cable spectrum.

Cable-based IP telephony falls under the broad umbrella of voice over IP (VoIP), meaning that many of the challenges facing cable operators are the same challenges that telecom carriers face as they work to deliver voice over ATM (VoATM) and frame-relay networks. However, ATM and frame-relay services are targeted primarily at the enterprise, a decision driven by economics and the need for service providers to recoup their initial investments in a reasonable amount of time. Cable, on the other hand, is targeted primarily at the home. Unlike most businesses, the overwhelming majority of homes in the United States are passed by cable, reducing the required up-front infrastructure investment significantly.

Cable is not without competition in the consumer market: digital subscriber line (xDSL) has emerged as the leading alternative to broadband cable. However, cable operators are well positioned to capitalize on the convergence trend if they can overcome the remaining technical hurdles and deliver telephony service comparable to the public switched telephone network.

In the case of cable TV, each television signal is given a 6-megahertz (MHz, millions of cycles per second) channel on the cable. The coaxial cable used to carry cable television can carry hundreds of megahertz of signals -- all the channels we could want to watch and more.

In a cable TV system, signals from the various channels are each given a 6-MHz slice of the cable's available bandwidth and then sent down the cable to your house. In some systems, coaxial cable is the only medium used for distributing signals. In other systems, fiber-optic cable goes from the cable company to different neighborhoods or areas. The fiber is then terminated and the signals move onto coaxial cable for distribution to individual houses.
When a cable company offers Internet access over the cable, Internet information can use the same cables because the cable modem system puts downstream data -- data sent from the Internet to an individual computer -- into a 6-MHz channel. On the cable, the data looks just like a TV channel. So Internet downstream data takes up the same amount of cable space as any single channel of programming. Upstream data -- information sent from an individual back to the Internet -- requires even less of the cable's bandwidth, just 2 MHz, since the assumption is that most people download far more information than they upload.
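The channel arithmetic above can be sketched in a few lines of Python; the plant spectrum figures used here (a 750 MHz plant with downstream starting at 54 MHz) are assumptions for illustration, not taken from the text.

# Back-of-the-envelope sketch of the channel arithmetic described above.
# The 750 MHz plant size and 54 MHz lower edge are assumptions for illustration.
CHANNEL_WIDTH_MHZ = 6                 # each TV channel (and downstream data) gets 6 MHz
DOWNSTREAM_SPECTRUM_MHZ = 750 - 54    # assumed usable downstream spectrum on the coax

channels = DOWNSTREAM_SPECTRUM_MHZ // CHANNEL_WIDTH_MHZ
print(f"Roughly {channels} six-MHz channels fit in the assumed downstream spectrum")

# Internet downstream data occupies just one of those 6 MHz slots,
# while upstream data uses a narrower slice (about 2 MHz, as described above).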

Putting both upstream and downstream data on the cable television system requires two types of equipment: a cable modem on the customer end and a cable modem termination system (CMTS) at the cable provider's end. Between these two types of equipment, all the computer networking, security and management of Internet access over cable television is put into place.

Definition

The Y2K38 problem has been described as a non-problem, given that we are expected to be running 64-bit operating systems well before 2038. Well, maybe.

The Problem
Just as Y2K problems arise from programs not allocating enough digits to the year, Y2K38 problems arise from programs not allocating enough bits to internal time. Unix internal time is commonly stored in a data structure using a long int containing the number of seconds since 1970. This time is used in all time-related processes such as scheduling, file timestamps, etc. On a 32-bit machine, this value is sufficient to store time only up to 19-Jan-2038. After this date, 32-bit clocks will overflow and return erroneous values such as 31-Dec-1969 or 13-Dec-1901.
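A short Python sketch can emulate the rollover (Python's own integers do not overflow; ctypes is used here only to mimic a signed 32-bit time_t):

# Sketch of the 32-bit rollover described above.
import ctypes
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
max_32bit_time = 2**31 - 1                 # largest value a signed 32-bit counter can hold
print(epoch + timedelta(seconds=max_32bit_time))
# -> 2038-01-19 03:14:07+00:00, the last representable second

wrapped = ctypes.c_int32(max_32bit_time + 1).value   # one second later, the counter wraps
print(wrapped)                                        # -> -2147483648
print(epoch + timedelta(seconds=wrapped))
# -> 1901-12-13 20:45:52+00:00, one of the "erroneous values" mentioned above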

Machines Affected
Currently (March 1998) there are a huge number of machines affected. Most of these will be scrapped before 2038. However, it is possible that some machines going into service now may still be operating in 2038. These may include process control computers, space probe computers, embedded systems in traffic light controllers, navigation systems and so on. Many of these systems may not be upgradeable. For instance, Ferranti Argus computers survived in service longer than anyone expected; long enough to present serious maintenance problems.

Note: Unix time is safe for the indefinite future for referring to future events, provided that enough bits are allocated. Programs or databases with a fixed field width should probably allocate at least 48 bits to storing time values.
Hardware, such as clock circuits, which has adopted the Unix time convention, may also be affected if 32-bit registers are used.

In my opinion, the Y2K38 threat is more likely to result in aircraft falling from the sky, glitches in life-support systems, and nuclear power plant meltdown than the Y2K threat, which is more likely to disrupt inventory control, credit card payments, pension plans etc. The reason for this is that the Y2K38 problem involves the basic system timekeeping from which most other time and date information is derived, while the Y2K problem (mostly) involves application programs.
Emulation and Megafunctions
While 32-bit CPUs may be obsolete in desktop computers and servers by 2038, they may still exist in microcontrollers and embedded circuits. For instance, the Z80 processor is still available in 1999 as an Embedded Function within Altera programmable devices. Such embedded functions present a serious maintenance problem for Y2K38 and similar rollover issues, since the package part number and other markings typically give no indication of the internal function.

Software Issues
Databases using 32-bit Unix time may still be in service in 2038. Care will have to be taken to avoid rollover issues.

Now that we have moved past the "Y2K" problem, can you believe that computer scientists are already projecting a new worldwide computer glitch for the year 2038? Commonly called the "Y2K38 problem", it affects computers that store time as a 32-bit "long int" counting seconds from January 1, 1970.

Definition

As XML becomes a predominant means of linking blocks of information together, there is a requirement to secure specific information - that is, to allow authorized entities access to specific information and to prevent unauthorized entities from accessing it. Current methods on the Internet include password protection, smart cards, PKI, tokens and a variety of other schemes. These typically solve the problem of keeping unauthorized users out of a site, but they do not provide mechanisms for protecting specific information from everyone who has authorized access to the site.

Now that XML is being used to provide searchable and organized information there is a sense of urgency to provide a standard to protect certain parts or elements from unauthorized access. The objective of XML encryption is to provide a standard methodology that prevents unauthorized access to specific information within an XML document.
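As a rough illustration of that objective - and not the W3C XML Encryption syntax itself - the following Python sketch encrypts the content of one element so that only key holders can read it while the rest of the document stays searchable. The element names are invented, and the third-party cryptography package is assumed to be available.

# Illustration only: protect one element's content while the rest stays readable.
# This is NOT the W3C XML Encryption format; names and key handling are hypothetical.
import xml.etree.ElementTree as ET
from cryptography.fernet import Fernet

doc = ET.fromstring(
    "<order><item>book</item><cardNumber>4111-1111-1111-1111</cardNumber></order>"
)

key = Fernet.generate_key()        # in practice the key would be managed per recipient
f = Fernet(key)

card = doc.find("cardNumber")
card.text = f.encrypt(card.text.encode()).decode()   # replace plaintext with ciphertext

print(ET.tostring(doc, encoding="unicode"))           # <item> is readable, card number is not

# An authorized entity holding the key can recover the protected value:
print(f.decrypt(doc.find("cardNumber").text.encode()).decode())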

XML (Extensible Markup Language) was developed by an XML Working Group (originally known as the SGML Editorial Review Board) formed under the auspices of the World Wide Web Consortium (W3C) in 1996. Even though HTML, DHTML and SGML already existed, XML was developed by the W3C to achieve the following design goals:

" XML shall be straightforwardly usable over the Internet.
" XML shall be compatible with SGML.
" It shall be easy to write programs, which process XML documents.
" The design of XML shall be formal and concise.
" XML documents shall be easy to create.

XML was created so that richly structured documents could be used over the web. The alternatives, HTML and SGML, are not practical for this purpose. HTML comes bound with a set of semantics and does not provide arbitrary structure. SGML does provide arbitrary structure, but it is too difficult to implement just for a web browser; SGML is so comprehensive that only large corporations can justify the cost of its implementation.

The eXtensible Markup Language, abbreviated as XML, describes a class of data objects called XML documents and partially describes the behavior of computer programs which process them. XML is thus a restricted form of SGML.

A data object is an XML document if it is well-formed, as defined in the specification. A well-formed XML document may in addition be valid if it meets certain further constraints. Each XML document has both a logical and a physical structure. Physically, the document is composed of units called entities. An entity may refer to other entities to cause their inclusion in the document. A document begins in a "root" or document entity.
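A quick way to see the well-formedness rule in action is to feed a document to an XML parser; the sketch below uses Python's standard library as a stand-in for the constraints defined in the specification (validity against a DTD or schema is a separate, stricter check not shown here).

# Minimal well-formedness check using Python's standard XML parser.
import xml.etree.ElementTree as ET

well_formed = "<note><to>Alice</to><from>Bob</from></note>"
broken      = "<note><to>Alice</to>"          # missing closing tags

for doc in (well_formed, broken):
    try:
        ET.fromstring(doc)                    # the document entity is the root here
        print("well-formed:", doc)
    except ET.ParseError as err:
        print("not well-formed:", err)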

Definition

Unicode provides a unique number for every character,
no matter what the platform,
no matter what the program,
no matter what the language.

Fundamentally, computers just deal with numbers. They store letters and other characters by assigning a number for each one. Before Unicode was invented, there were hundreds of different encoding systems for assigning these numbers. No single encoding could contain enough characters: for example, the European Union alone requires several different encodings to cover all its languages. Even for a single language like English no single encoding was adequate for all the letters, punctuation, and technical symbols in common use.

These encoding systems also conflict with one another. That is, two encodings can use the same number for two different characters, or use different numbers for the same character. Any given computer (especially servers) needs to support many different encodings; yet whenever data is passed between different encodings or platforms, that data always runs the risk of corruption.
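The conflict can be shown concretely with Python's built-in codecs: the same byte means different characters in two legacy encodings, while the euro sign gets different numbers in different encodings but a single Unicode code point.

# Concrete example of the encoding conflicts described above.
byte = b"\xa4"
print(byte.decode("iso8859-1"))    # '¤' (currency sign) in Latin-1
print(byte.decode("iso8859-15"))   # '€' (euro sign) in Latin-9

euro = "€"
print(euro.encode("cp1252"))       # b'\x80' under Windows code page 1252
print(euro.encode("iso8859-15"))   # b'\xa4' under Latin-9
print(hex(ord(euro)))              # 0x20ac - its single, unique Unicode code point

# Converting between codesets goes through Unicode: decode to str, then re-encode.
utf8_bytes = byte.decode("iso8859-15").encode("utf-8")   # b'\xe2\x82\xac'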

This paper is intended for software developers interested in support for the Unicode standard in the Solaris™ 7 operating environment. It discusses the following topics:

" An overview of multilingual computing, and how Unicode and the internationalization framework in the Solaris 7 operating environment work together to achieve this aim
" The Unicode standard and support for it within the Solaris operating environment
" Unicode in the Solaris 7 Operating Environment
" How developers can add Unicode support to their applications
" Codeset conversions

Unicode And Multilingual Computing

It is not a new idea that today's global economy demands global computing solutions. Instant communications and the free flow of information across continents - and across computer platforms - characterize the way the world has been doing business for some time. The widespread use of the Internet and the arrival of electronic commerce (e-commerce) together offer companies and individuals a new set of horizons to explore and master. In the global audience, there are always people and businesses at work - 24 hours of the day, 7 days a week. So global computing can only grow.

What is new is the increasing demand of users for a computing environment that is in harmony with their own cultural and linguistic requirements. Users want applications and file formats that they can share with colleagues and customers an ocean away, application interfaces in their own language, and time and date displays that they understand at a glance. Essentially, users want to write and speak at the keyboard in the same way that they always write and speak. Sun Microsystems addresses these needs at various levels, bringing together the components that make possible a truly multilingual computing environment.

Definition

Mobile computing devices have changed the way we look at computing. Laptops and personal digital assistants (PDAs) have unchained us from our desktop computers. A group of researchers at AT&T Laboratories Cambridge are preparing to put a new spin on mobile computing. In addition to taking the hardware with you, they are designing a ubiquitous networking system that allows your program applications to follow you wherever you go.

By using a small radio transmitter and a building full of special sensors, your desktop can be anywhere you are, not just at your workstation. At the press of a button, the computer closest to you in any room becomes your computer for as long as you need it. In addition to computers, the Cambridge researchers have designed the system to work for other devices, including phones and digital cameras. As we move closer to intelligent computers, they may begin to follow our every move.

The essence of mobile computing is that a user's applications are available, in a suitably adapted form, wherever that user goes. Within a richly equipped networked environment such as a modern office the user need not carry any equipment around; the user-interfaces of the applications themselves can follow the user as they move, using the equipment and networking resources available. We call these applications Follow-me applications.

Typically, a context-aware application needs to know the location of users and equipment, and the capabilities of the equipment and networking infrastructure. In this paper we describe a sensor-driven, or sentient, computing platform that collects environmental data, and presents that data in a form suitable for context-aware applications.

Context-Aware Application

A context-aware application is one which adapts its behaviour to a changing environment. Other examples of context-aware applications are 'construction-kit computers', which automatically build themselves by organizing a set of proximate components to act as a more complex device, and 'walk-through videophones', which automatically select streams from a range of cameras to maintain an image of a nomadic user.

The platform we describe has five main components:
1. A fine-grained location system, which is used to locate and identify objects.
2. A detailed data model, which describes the essential real world entities that are involved in mobile applications.
3. A persistent distributed object system, which presents the data model in a form accessible to applications.
4. Resource monitors, which run on networked equipment and communicate status information to a centralized repository.
5. A spatial monitoring service, which enables event-based location-aware applications.
Finally, we describe an example application to show how this platform may be used.
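As a toy illustration of how components 2, 3 and 5 might fit together, the sketch below defines a tiny location data model and a spatial monitor that raises an event when a tracked object enters a named region; all class names, regions and coordinates are invented.

# Hypothetical miniature of a location data model plus spatial monitoring.
from dataclasses import dataclass

@dataclass
class Sighting:
    obj: str          # e.g. a user's badge or a piece of equipment
    x: float          # location reported by the fine-grained location system
    y: float

REGIONS = {"meeting_room": (0, 0, 5, 4), "corridor": (5, 0, 20, 2)}   # metres

def region_of(s: Sighting):
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= s.x <= x1 and y0 <= s.y <= y1:
            return name
    return None

def on_enter(obj, region):
    print(f"event: {obj} entered {region}")   # a follow-me application would react here

last_region = {}
for sighting in [Sighting("alice", 1.0, 1.0), Sighting("alice", 6.0, 1.0)]:
    r = region_of(sighting)
    if r and last_region.get(sighting.obj) != r:
        on_enter(sighting.obj, r)
    last_region[sighting.obj] = r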

Definition

Tripwire is a reliable intrusion detection system. It is a software tool that checks to see what has changed in your system, mainly monitoring the key attributes of your files; by key attributes we mean the binary signature, size and other related data. Security and operational stability must go hand in hand: if the user does not have control over the various operations taking place, then the security of the system is naturally compromised. Tripwire has a powerful feature which pinpoints the changes that have taken place, notifies the administrator of these changes, determines the nature of the changes and provides the information needed to decide how to manage the change.

Tripwire integrity management solutions monitor changes to vital system and configuration files. Any changes that occur are compared to a snapshot of the established good baseline. The software detects the changes, notifies the staff and enables rapid recovery and remedy for changes. All Tripwire installations can be centrally managed, and Tripwire software's cross-platform functionality enables you to manage thousands of devices across your infrastructure.

Security not only means protecting your system against various attacks but also means taking quick and decisive action when your system is attacked. First of all we must find out whether our system has been attacked at all; in the past, system logs were certainly handy for this. You can see evidence of password guessing and other suspicious activities, and logs are ideal for tracing the steps of a cracker as he tries to penetrate the system. But who has the time and the patience to examine the logs on a daily basis?

Penetration usually involves a change of some kind: a new port has been opened, a new service added, or - most commonly - a file has changed. If you can identify the key subset of these files and monitor them on a daily basis, you will be able to detect whether any intrusion took place. Tripwire is an open source program created to monitor changes in a key subset of files identified by the user and report on any changes in any of those files. When changes are detected, the system administrator is informed. Tripwire's principle is very simple: the system administrator identifies key files and causes Tripwire to record checksums for those files.

The administrator also puts in place a cron job that scans those files at regular intervals (daily or more frequently), comparing them to the original checksums. Any changes, additions or deletions are reported to the administrator, who can then determine whether the changes were permitted or unauthorized. In the former case the database is updated so that the same change is not reported again; in the latter case, proper recovery action is taken immediately.
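The principle can be sketched in a few lines of Python using ordinary cryptographic hashes; the file paths are only examples, and a real Tripwire deployment also signs and protects its own database and policy file.

# Minimal sketch of the baseline-and-scan principle described above.
import hashlib, json, os

KEY_FILES = ["/etc/passwd", "/etc/hosts"]          # example "key subset" of files
BASELINE = "baseline.json"

def checksum(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record_baseline():
    db = {p: checksum(p) for p in KEY_FILES if os.path.exists(p)}
    with open(BASELINE, "w") as f:
        json.dump(db, f)

def scan():
    with open(BASELINE) as f:
        db = json.load(f)
    for path, old in db.items():
        if not os.path.exists(path):
            print("DELETED:", path)
        elif checksum(path) != old:
            print("CHANGED:", path)        # administrator decides: authorized or not?

# record_baseline() once, then run scan() from cron at regular intervals.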
Tripwire For Servers

Tripwire for Servers is software that is used exclusively on servers. It can be installed on any server that needs to be monitored for changes. Typical servers include mail servers, web servers, firewalls, transaction servers and development servers. Any server where it is imperative to identify if and when a file system change has occurred should be monitored with Tripwire for Servers. For the software to work, two important things must be present: the policy file and the database.

The Tripwire for Servers software conducts subsequent file checks, automatically comparing the state of the system with the baseline database. Any inconsistencies are reported to the Tripwire Manager and to the host system log file. Reports can also be emailed to an administrator. If a violation is an authorized change, a user can update the database so the change no longer shows up as a violation.

Definition

From its origin more than 25 years ago, Ethernet has evolved to meet the increasing demands of packet-switched networks. Due to its proven low implementation cost, its known reliability, and relative simplicity of installation and maintenance, its popularity has grown to the point that today nearly all traffic on the Internet originates or ends with an Ethernet connection. Further, as the demand for ever-faster network speeds has grown, Ethernet has been adapted to handle these higher speeds and the concomitant surges in volume demand that accompany them.

The One Gigabit Ethernet standard is already being deployed in large numbers in both corporate and public data networks, and has begun to move Ethernet from the realm of the local area network out to encompass the metro area network. Meanwhile, an even faster 10 Gigabit Ethernet standard is nearing completion. This latest standard is being driven not only by the increase in normal data traffic but also by the proliferation of new, bandwidth-intensive applications.

The draft standard for 10 Gigabit Ethernet is significantly different in some respects from earlier Ethernet standards, primarily in that it will only function over optical fiber and only operate in full-duplex mode, meaning that collision detection protocols are unnecessary. Ethernet can now step up to 10 gigabits per second, yet it remains Ethernet, including the packet format, and current capabilities are easily transferable to the new draft standard.

In addition, 10 Gigabit Ethernet does not make current investments in network infrastructure obsolete. The task force heading the standards effort has taken steps to ensure that 10 Gigabit Ethernet is interoperable with other networking technologies such as SONET. The standard enables Ethernet packets to travel across SONET links with very little inefficiency.

Ethernet's reach, already extended to metro area networks, can now be extended again onto wide area networks, both in concert with SONET and as end-to-end Ethernet. With the balance of network traffic today heavily favoring packet-switched data over voice, it is expected that the new 10 Gigabit Ethernet standard will help to create a convergence between networks designed primarily for voice and the new data-centric networks.
10 Gigabit Ethernet Technology Overview

The 10 Gigabit Ethernet Alliance (10GEA) was established in order to promote standards-based 10 Gigabit Ethernet technology and to encourage the use and implementation of 10 Gigabit Ethernet as a key networking technology for connecting various computing, data and telecommunications devices. The charter of the 10 Gigabit Ethernet Alliance includes:

" Supporting the 10 Gigabit Ethernet standards effort conducted in the IEEE 802.3 working group

" Contributing resources to facilitate convergence and consensus on technical specifications

" Promoting industry awareness, acceptance, and advancement of the 10 Gigabit Ethernet standard

" Accelerating the adoption and usage of 10 Gigabit Ethernet products and services

" Providing resources to establish and demonstrate multi-vendor interoperability and generally encourage and promote interoperability and interoperability events

Definition

Scientists all over the world routinely generate large volumes of data from both computational and laboratory experiments. Such data, which are irreproducible and expensive to regenerate, must be safely archived for future reference and research. The archived data format and the point at which users archive it are matters of individual preference, and scientists usually store data using multiple platforms. Further, not only do scientists expect their data to stay in the archive despite personnel changes, they expect those responsible for the archive to deal with storage technology changes without those changes affecting either the scientists or their work.

Essentially, we require a data-intensive computing environment that works seamlessly across scientific disciplines. Ideally that environment should provide all of the features of a file system. Research indicates that supporting this type of massive data management requires some form of metadata to catalog and organize the data.

Problems Identified

The National Science Digital Library has implemented metadata previously and has found it necessary to restrict metadata to a specific format. The Scientific Archive Management System (SAM), a metadata-based archive for scientific data, has provided flexible archival storage for very large databases. SAM uses metadata to organize and manage the data without imposing predefined metadata formats on scientists. SAM's ability to handle different data and metadata types is a key difference between it and many other archives.

Restrictions imposed by SAM:

It can readily accommodate any type of data file regardless of format, content or domain. The system makes no assumptions about data format, the platform on which the user generated the file, the file's content, or even the metadata's content. SAM requires only that the user have data files to store, and it allows the storage of some metadata about each data file.
Working at the metadata level also avoids unnecessary data retrieval from the archive, which can be time-consuming depending on the file's size, network connectivity or archive storage medium. SAM software hides system complexity while making it easy to add functionality and augment storage capacity as demand increases.

About SAM

SAM came into existence in 1995 at EMSL, the Environmental Molecular Sciences Laboratory. In 2002, EMSL migrated the original two-server hierarchical storage management system to an incrementally extensible collection of Linux-based disk farms. The metadata-centric architecture and the original decision to present the archive to users as a single large file system made the hardware migration a relatively painless process.

Definition

MPEG is the famous four-letter word which stands for the "Moving Picture Experts Group".
To the real world, MPEG is a generic means of compactly representing digital video and audio signals for consumer distribution. The essence of MPEG is its syntax: the little tokens that make up the bitstream. MPEG's semantics then tell you (if you happen to be a decoder, that is) how to turn the compact tokens back into something resembling the original stream of samples.

These semantics are merely a collection of rules (which people like to call algorithms, but that would imply there is a mathematical coherency to a scheme cooked up by trial and error...). These rules are highly reactive to combinations of bitstream elements set in headers and so forth.

MPEG is an institution unto itself as seen from within its own universe. When (unadvisedly) placed in the same room, its inhabitants can spontaneously erupt into a blood-letting debate, triggered by mere anxiety over the most subtle juxtaposition of words buried in the most obscure documents. Such stimulus comes readily from transparencies flashed on an overhead projector. Yet at the same time, this gestalt can appear totally indifferent to critical issues set before it for many months.

It should therefore be no surprise that MPEG's dualistic chemistry reflects the extreme contrasts of its two founding fathers: the fiery Leonardo Chiariglione (CSELT, Italy) and the peaceful Hiroshi Yasuda (JVC, Japan). The excellent byproduct of the successful MPEG process became an International Standards document safely administered to the public in three parts: Systems (Part 1), Video (Part 2), and Audio (Part 3).

Pre MPEG
Before providence gave us MPEG, there was the looming threat of world domination by proprietary standards cloaked in syntactic mystery. With lossy compression being such an inexact science (which always boils down to visual tweaking and implementation tradeoffs), you never know what's really behind any such scheme (other than a lot of marketing hype).
Seeing this threat - that is, the need for world interoperability - the fathers of MPEG sought the help of their colleagues to form a committee to standardize a common means of representing video and audio (a la DVI) on compact discs... and maybe it would be useful for other things too.

MPEG borrowed significantly from JPEG and, more directly, H.261. By the end of the third year (1990), a syntax emerged which, when applied to SIF-rate video and compact disc-rate audio at a combined bitrate of 1.5 Mbit/sec, approximated the pleasure-filled viewing experience offered by the standard VHS format.

After demonstrations proved that the syntax was generic enough to be applied to bit rates and sample rates far higher than the original primary target application ("Hey, it actually works!"), a second phase (MPEG-2) was initiated within the committee to define a syntax for efficient representation of broadcast video, or SDTV as it is now known (Standard Definition Television) - not to mention the side benefits: frequent flier miles.

Definition

Sockets are interfaces that can "plug into" each other over a network. Once so "plugged in", the programs so connected communicate. A "server" program is exposed via a socket connected to a certain /etc/services port number. A "client" program can then connect its own socket to the server's socket, at which time the client program's writes to the socket are read as stdin by the server program, and the server program's stdout is read from the client's socket.

Before a user process can perform I/O operations, it calls Open to specify and obtain permissions for the file or device to be used. Once an object has been opened, the user process makes one or more calls to Read or Write data. Read reads data from the object and transfers it to the user process, while Write transfers data from the user process to the object. After all transfer operations are complete, the user process calls Close to inform the operating system that it has finished using that object.

When facilities for InterProcess Communication (IPC) and networking were added, the idea was to make the interface to IPC similar to that of file I/O. In Unix, a process has a set of I/O descriptors that one reads from and writes to. These descriptors may refer to files, devices, or communication channels (sockets). The lifetime of a descriptor is made up of three phases: creation (open socket), reading and writing (receive and send to socket), and destruction (close socket).
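The same lifecycle can be seen in a minimal Python example using the standard socket module (the server and client below would run in separate processes; the address and port are arbitrary examples).

# Minimal sketch of the create / read-write / close lifecycle of an INET STREAM socket.
import socket

HOST, PORT = "127.0.0.1", 50007        # example address, not a well-known service

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:   # creation
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, addr = srv.accept()      # the "server" socket hands out a per-connection socket
        with conn:
            data = conn.recv(1024)     # read
            conn.sendall(data)         # write (echo it back)
    # destruction happens automatically when the with-blocks close the sockets

def client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))
        s.sendall(b"hello")
        print(s.recv(1024))            # -> b'hello'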

History
Sockets are used nearly everywhere, but are one of the most severely misunderstood technologies around. This is a 10,000-foot overview of sockets. It's not really a tutorial - you'll still have work to do in getting things working - and it doesn't cover the fine points (and there are a lot of them), but I hope it will give you enough background to begin using them decently. I'm only going to talk about INET sockets, but they account for at least 99% of the sockets in use. And I'll only talk about STREAM sockets - unless you really know what you're doing (in which case this HOWTO isn't for you!), you'll get better behavior and performance from a STREAM socket than anything else. I will try to clear up the mystery of what a socket is, as well as give some hints on how to work with blocking and non-blocking sockets. But I'll start by talking about blocking sockets; you'll need to know how they work before dealing with non-blocking sockets.

Part of the trouble with understanding these things is that "socket" can mean a number of subtly different things, depending on context. So first, let's make a distinction between a "client" socket - an endpoint of a conversation, and a "server" socket, which is more like a switchboard operator. The client application (your browser, for example) uses "client" sockets exclusively; the web server it's talking to uses both "server" sockets and "client" sockets.
Of the various forms of IPC (Inter Process Communication), sockets are by far the most popular. On any given platform, there are likely to be other forms of IPC that are faster, but for cross-platform communication, sockets are about the only game in town.

They were invented in Berkeley as part of the BSD flavor of Unix. They spread like wildfire with the Internet. With good reason -- the combination of sockets with INET makes talking to arbitrary machines around the world unbelievably easy (at least compared to other schemes).

Definition

The Cisco IOS Firewall provides robust, integrated firewall and intrusion detection functionality for every perimeter of the network. Available for a wide range of Cisco IOS software-based routers, the Cisco IOS Firewall offers sophisticated security and policy enforcement for connections within an organization (intranet) and between partner networks (extranets), as well as for securing Internet connectivity for remote and branch offices.

A security-specific, value-add option for Cisco IOS Software, the Cisco IOS Firewall enhances existing Cisco IOS security capabilities, such as authentication, encryption, and failover, with state-of-the-art security features, such as stateful, application-based filtering (context-based access control), defense against network attacks, per user authentication and authorization, and real-time alerts.

The Cisco IOS Firewall is configurable via Cisco ConfigMaker software, an easy-to-use Microsoft Windows 95, 98, NT 4.0 based software tool.

A firewall is a network security device that ensures that all communications attempting to cross it meet an organization's security policy. Firewalls track and control communications, deciding whether to allow, reject or encrypt them. Firewalls are used to connect a corporate local network to the Internet and are also used within networks; in other words, they stand between the trusted network and the untrusted network.

The first and most important decision reflects the policy of how your company or organization wants to operate the system: is the firewall in place to explicitly deny all services except those critical to the mission of connecting to the net, or is it in place to provide a metered and audited method of 'queuing' access in a non-threatening manner? The second issue is what level of monitoring, redundancy and control you want; having established the acceptable risk level, you can form a checklist of what should be monitored, permitted and denied. The third issue is financial.
Implementation methods

Two basic methods of implementing a firewall are:
1. As a Screening Router:
A screening router is a special computer or electronic device that screens (filters out) specific packets based on criteria that are defined. Almost all current screening routers operate in the manner outlined below (a small sketch follows the list).
a. Packet filter criteria must be stored for the ports of the packet filter device. The packet filter criteria are called packet filter rules.
b. When a packet arrives at the port, the packet header is parsed. Most packet filters examine the fields in only the IP, TCP and UDP headers.
c. The packet filter rules are stored in a specific order. Each rule is applied to the packet in the order in which it is stored.
d. If a rule blocks the transmission or reception of a packet, the packet is not allowed.
e. If a rule allows the transmission or reception of a packet, the packet is allowed.
f. If a packet does not satisfy any rule, it is blocked.
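The sketch below is a toy first-match filter in the spirit of steps a-f; the field names and rules are invented for illustration, and a real screening router parses actual IP, TCP and UDP headers rather than Python dictionaries.

# Toy first-match packet filter illustrating the rule-evaluation steps above.
RULES = [   # stored and applied in order (step c)
    {"action": "allow", "proto": "tcp", "dst_port": 80},    # web traffic in
    {"action": "allow", "proto": "tcp", "dst_port": 25},    # mail
    {"action": "block", "proto": "udp", "dst_port": None},  # any UDP
]

def filter_packet(packet):
    for rule in RULES:
        if rule["proto"] != packet["proto"]:
            continue
        if rule["dst_port"] is not None and rule["dst_port"] != packet["dst_port"]:
            continue
        return rule["action"]          # steps d and e: the first matching rule decides
    return "block"                     # step f: no rule matched, drop the packet

print(filter_packet({"proto": "tcp", "dst_port": 80}))   # -> allow
print(filter_packet({"proto": "tcp", "dst_port": 23}))   # -> block (default)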

Definition

Given the technical difficulties in cranking higher clock speeds out of present single core processors, dual core architecture has started to establish itself as the answer to the development of future processors. With the release of the AMD dual core Opteron and the Intel Pentium Extreme Edition 840, April 2005 officially marks the beginning of the dual core endeavors of both companies.

The transition from single core to dual core architecture was triggered by a couple of factors. According to Moore's Law, the number of transistors (complexity) on a microprocessor doubles approximately every 18 months. The latest 2 MB Prescott core possesses more than 160 million transistors; breaking the 200 million mark is just a matter of time. Transistor count is one of the reasons driving the industry toward dual core architecture. Instead of using the available astronomically high transistor counts to design a new, more complex single core processor that would offer higher performance than the present offerings, chip makers have decided to put these transistors to use in producing two identical yet independent cores and combining them into a single package.

To them, this is actually a far better use of the available transistors, and in return it should give consumers more value for their money. Besides, with the single core's thermal envelope being pushed to its limit and the severe current leakage issues that have hit the silicon manufacturing industry ever since the transition to 90 nm chip fabrication, it is extremely difficult for chip makers (particularly Intel) to squeeze more clock speed out of the present single core design. Pushing for higher clock speeds is not a feasible option at present because of transistor current leakage, and adding more features to the core increases the complexity of the design and makes it harder to manage. These are the factors that have made the dual core option the more viable alternative for making full use of the available transistors.

What is a dual core processor?
A dual core processor is a CPU with two separate cores on the same die, each with its own cache. It's the equivalent of getting two microprocessors in one. In a single-core or traditional processor the CPU is fed strings of instructions it must order, execute, then selectively store in its cache for quick retrieval. When data outside the cache is required, it is retrieved through the system bus from random access memory (RAM) or from storage devices. Accessing these slows down performance to the maximum speed the bus, RAM or storage device will allow, which is far slower than the speed of the CPU. The situation is compounded when multi-tasking. In this case the processor must switch back and forth between two or more sets of data streams and programs. CPU resources are depleted and performance suffers.

In a dual core processor each core handles incoming data strings simultaneously to improve efficiency. Just as two heads are better than one, so are two hands. Now when one is executing the other can be accessing the system bus or executing its own code. Adding to this favorable scenario, both AMD and Intel's dual-core flagships are 64-bit.
To utilize a dual core processor, the operating system must be able to recognize multi-threading and the software must have simultaneous multi-threading technology (SMT) written into its code. SMT enables parallel multi-threading wherein the cores are served multi-threaded instructions in parallel. Without SMT the software will only recognize one core. Adobe Photoshop is an example of SMT-aware software. SMT is also used with multi-processor systems common to servers.
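As a rough illustration of why the software side matters, the Python sketch below runs two CPU-bound tasks as separate processes so the operating system can schedule them on the two cores; with a single worker the same work would be serialized.

# Sketch only: two CPU-bound workers run as separate processes so the scheduler can
# place them on two cores. (Python threads would not show the benefit for CPU-bound
# work because of the interpreter lock.)
from multiprocessing import Process
import time

def busy_work(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    start = time.time()
    workers = [Process(target=busy_work, args=(10_000_000,)) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(f"two parallel workers finished in {time.time() - start:.2f} s")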

An attractive value of dual core processors is that they do not require a new motherboard, but can be used in existing boards that feature the correct socket. For the average user the difference in performance will be most noticeable in multi-tasking until more software is SMT aware. Servers running multiple dual core processors will see an appreciable increase in performance.

Definition

Nanorobotics is an emerging field that deals with the controlled manipulation of objects with nanometer-scale dimensions. Typically, an atom has a diameter of a few Ångstroms (1 Å = 0.1 nm = 10^-10 m), a molecule's size is a few nm, and clusters or nanoparticles formed by hundreds or thousands of atoms have sizes of tens of nm. Therefore, Nanorobotics is concerned with interactions with atomic- and molecular-sized objects, and is sometimes called Molecular Robotics.

Molecular Robotics falls within the purview of Nanotechnology, which is the study of phenomena and structures with characteristic dimensions in the nanometer range. The birth of Nanotechnology is usually associated with a talk by Nobel-prize winner Richard Feynman entitled "There's Plenty of Room at the Bottom", whose text may be found in [Crandall & Lewis 1992]. Nanotechnology has the potential for major scientific and practical breakthroughs.

Future applications ranging from very fast computers to self-replicating robots are described in Drexler's seminal book [Drexler 1986]. In a less futuristic vein, the following potential applications were suggested by well-known experimental scientists at the Nano4 conference held in Palo Alto in November 1995:

" Cell probes with dimensions ~ 1/1000 of the cell's size
" Space applications, e.g. hardware to fly on satellites
" Computer memory
" Near field optics, with characteristic dimensions ~ 20 nm
" X-ray fabrication, systems that use X-ray photons
" Genome applications, reading and manipulating DNA
" Nanodevices capable of running on very small batteries
" Optical antennas

Nanotechnology is being pursued along two converging directions. From the top down, semiconductor fabrication techniques are producing smaller and smaller structures - see e.g. [Colton & Marrian 1995] for recent work. For example, the line width of the original Pentium chip is 350 nm. Current optical lithography techniques have obvious resolution limitations because of the wavelength of visible light, which is on the order of 500 nm. X-ray and electron-beam lithography will push sizes further down, but with a great increase in complexity and cost of fabrication. These top-down techniques do not seem promising for building nanomachines that require precise positioning of atoms or molecules.

Alternatively, one can proceed from the bottom up, by assembling atoms and molecules into functional components and systems. There are two main approaches for building useful devices from nanoscale components. The first is based on self-assembly, and is a natural evolution of traditional chemistry and bulk processing-see e.g. [Gómez-López et al. 1996]. The other is based on controlled positioning of nanoscale objects, direct application of forces, electric fields, and so on. The self-assembly approach is being pursued at many laboratories. Despite all the current activity, self-assembly has severe limitations because the structures produced tend to be highly symmetric, and the most versatile self-assembled systems are organic and therefore generally lack robustness. The second approach involves Nanomanipulation, and is being studied by a small number of researchers, who are focusing on techniques based on Scanning Probe Microscopy.

Definition

The Unified Modeling Language (UML) is a standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects. Using the UML helps project teams communicate, explore potential designs, and validate the architectural design of the software.

Large enterprise applications - the ones that execute core business applications, and keep a company going - must be more than just a bunch of code modules. They must be structured in a way that enables scalability, security, and robust execution under stressful conditions, and their structure - frequently referred to as their architecture - must be defined clearly enough that maintenance programmers can (quickly!) find and fix a bug that shows up long after the original authors have moved on to other projects. That is, these programs must be designed to work perfectly in many areas, and business functionality is not the only one (although it certainly is the essential core). Of course a well-designed architecture benefits any program, and not just the largest ones as we've singled out here.

We mentioned large applications first because structure is a way of dealing with complexity, so the benefits of structure (and of modeling and design, as we'll demonstrate) compound as application size grows large. Another benefit of structure is that it enables code reuse: Design time is the easiest time to structure an application as a collection of self-contained modules or components. Eventually, enterprises build up a library of models of components, each one representing an implementation stored in a library of code modules.

Modeling
Modeling is the designing of software applications before coding. Modeling is an essential part of large software projects, and it is helpful to medium and even small projects as well. A model plays the analogous role in software development that blueprints and other plans (site maps, elevations, physical models) play in the building of a skyscraper. Using a model, those responsible for a software development project's success can assure themselves that business functionality is complete and correct, end-user needs are met, and program design supports requirements for scalability, robustness, security, extendibility, and other characteristics, before implementation in code renders changes difficult and expensive to make.

Surveys show that large software projects have a huge probability of failure - in fact, it's more likely that a large software application will fail to meet all of its requirements on time and on budget than that it will succeed. If you're running one of these projects, you need to do all you can to increase the odds for success, and modeling is the only way to visualize your design and check it against requirements before your crew starts to code.

Raising the Level of Abstraction:
Models help us by letting us work at a higher level of abstraction. A model may do this by hiding or masking details, bringing out the big picture, or by focusing on different aspects of the prototype. In UML 2.0, you can zoom out from a detailed view of an application to the environment where it executes, visualizing connections to other applications or, zoomed even further, to other sites. Alternatively, you can focus on different aspects of the application, such as the business process that it automates, or a business rules view. The new ability to nest model elements, added in UML 2.0, supports this concept directly.

Introduction

Over the last few years, the interest in connecting computers and computer-supported devices to wireless networks has steadily increased. Computers are becoming more and more seamlessly integrated with everyday equipment, and prices are dropping. At the same time wireless networking technologies, such as Bluetooth and IEEE 802.11b WLAN, are emerging. This gives rise to many new fascinating scenarios in areas such as health care, safety and security, transportation, and the processing industry. Small devices such as sensors can be connected to an existing network infrastructure such as the global Internet, and monitored from anywhere.

The Internet technology has proven itself flexible enough to incorporate the changing network environments of the past few decades. While originally developed for low speed networks such as the ARPANET, the Internet technology today runs over a large spectrum of link technologies with vastly different characteristics in terms of bandwidth and bit error rate. It is highly advantageous to use the existing Internet technology in the wireless networks of tomorrow since a large amount of applications using the Internet technology have been developed. Also, the large connectivity of the global Internet is a strong incentive.

Since small devices such as sensors are often required to be physically small and inexpensive, an implementation of the Internet protocols will have to deal with having limited computing resources and memory. This report describes the design and implementation of a small TCP/IP stack called lwIP that is small enough to be used in minimal systems.

Overview

As in many other TCP/IP implementations, the layered protocol design has served as a guide for the design of the lwIP implementation. Each protocol is implemented as its own module, with a few functions acting as entry points into each protocol. Even though the protocols are implemented separately, some layer violations are made, as discussed above, in order to improve performance both in terms of processing speed and memory usage. For example, when verifying the checksum of an incoming TCP segment and when demultiplexing a segment, the source and destination IP addresses of the segment have to be known by the TCP module. Instead of passing these addresses to TCP by means of a function call, the TCP module is aware of the structure of the IP header, and can therefore extract this information by itself.
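For reference, the Internet checksum that such a stack computes over headers and segments can be sketched in a few lines; this follows the standard RFC 1071 algorithm (sum 16-bit words with end-around carry, then take the one's complement) rather than lwIP's own optimized code.

# Sketch of the standard Internet checksum (RFC 1071).
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# Verification trick: a buffer whose checksum field (here 0xb861) is already filled in
# sums to 0xFFFF, so recomputing the checksum over it yields zero.
header = bytes.fromhex("45000073000040004011b861c0a80001c0a800c7")  # sample IPv4 header
print(hex(internet_checksum(header)))   # -> 0x0 if the embedded checksum is consistent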

lwIP consists of several modules. Apart from the modules implementing the TCP/IP protocols (IP, ICMP, UDP, and TCP), a number of support modules are implemented.
The support modules consist of:

" The operating system emulation layer (described in Chapter3)

" The buffer and memory management subsystems
(described in Chapter 4)

" Network interface functions (described in Chapter 5)

" Functions for computing Internet checksum (Chapter 6)

" An abstract API (described in Chapter 8 )

Introduction

In today's information age it is not difficult to collect data about an individual and use that information to exercise control over the individual. Individuals generally do not want others to have personal information about them unless they decide to reveal it. With the rapid development of technology, it is more difficult to maintain the levels of privacy citizens knew in the past. In this context, data security has become an inevitable feature. Conventional methods of identification based on possession of ID cards or on exclusive knowledge like a social security number or a password are not altogether reliable. ID cards can be lost, forged or misplaced; passwords can be forgotten.

As a result, an unauthorized user may be able to break into an account with little effort, so there is a need to ensure denial of access to classified data by unauthorized persons. Biometric technology has now become a viable alternative to traditional identification systems because of its tremendous accuracy and speed. A biometric system automatically verifies or recognizes the identity of a living person based on physiological or behavioral characteristics.

Since the persons to be identified must be physically present at the point of identification, biometric techniques give high security for sensitive information stored in mainframes and help avoid fraudulent use of ATMs. This paper explores the concept of iris recognition, which is one of the most popular biometric techniques. This technology finds applications in diverse fields.

Biometrics - Future Of Identity
Biometrics dates back to the ancient Egyptians, who measured people to identify them. Biometric devices have three primary components:
1. An automated mechanism that scans and captures a digital or analog image of a living personal characteristic.
2. Compression, processing, storage and comparison of the image with stored data.
3. Interfaces with application systems.

A biometric system can be divided into two stages: the enrolment module and the identification module. The enrolment module is responsible for training the system to identify a given person. During the enrolment stage, a biometric sensor scans the person's physiognomy to create a digital representation. A feature extractor processes the representation to generate a more compact and expressive representation called a template. For an iris image these include the various visible characteristics of the iris such as contraction furrows, pits, rings, etc. The template for each user is stored in a biometric system database.

The identification module is responsible for recognizing the person. During the identification stage, the biometric sensor captures the characteristics of the person to be identified and converts it into the same digital format as the template. The resulting template is fed to the feature matcher, which compares it against the stored template to determine whether the two templates match.

The identification can take the form of verification (authenticating a claimed identity) or recognition (determining the identity of a person from a database of known persons). In a verification system, when the captured characteristic and the stored template of the claimed identity are the same, the system concludes that the claimed identity is correct. In a recognition system, when the captured characteristic matches one of the stored templates, the system identifies the person with the matching template.
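The matcher can be illustrated with a toy sketch in which templates are bit strings compared by Hamming distance (a common way of comparing iris codes); the enrolled templates, names and acceptance threshold below are invented.

# Toy illustration of verification (1:1) and recognition (1:N) matching.
def hamming_distance(a: str, b: str) -> float:
    """Fraction of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

DATABASE = {"alice": "1011001110100101", "bob": "0110110001011010"}   # enrolled templates
THRESHOLD = 0.25          # accept if at most 25% of the bits differ (assumed value)

def verify(claimed_identity: str, live_template: str) -> bool:
    """Verification: one-to-one comparison against the claimed identity's template."""
    return hamming_distance(DATABASE[claimed_identity], live_template) <= THRESHOLD

def identify(live_template: str):
    """Recognition: one-to-many search over the whole database."""
    best = min(DATABASE, key=lambda name: hamming_distance(DATABASE[name], live_template))
    return best if hamming_distance(DATABASE[best], live_template) <= THRESHOLD else None

probe = "1011001110100111"                 # differs from alice's template in one bit
print(verify("alice", probe), identify(probe))   # -> True alice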

Introduction

While Internet technologies largely succeed in overcoming the barriers of time and distance, existing Internet technologies have yet to fully accommodate the increasing mobile computer usage. A promising technology used to eliminate this current barrier is Mobile IP. The emerging 3G mobile networks are set to make a huge difference to the international business community. 3G networks will provide sufficient bandwidth to run most of the business computer applications while still providing a reasonable user experience.

However, 3G networks are not based on only one standard, but on a set of radio technology standards such as cdma2000, EDGE and WCDMA. It is easy to foresee that the mobile user will from time to time also want to connect to fixed broadband networks, wireless LANs and mixtures of new technologies, such as Bluetooth associated with, for example, cable TV and DSL access points.

In this light, a common macro mobility management framework is required in order to allow mobile users to roam between different access networks with little or no manual intervention. (Micro mobility issues such as radio specific mobility enhancements are supposed to be handled within the specific radio technology.) IETF has created the Mobile IP standard for this purpose.

Mobile IP differs from other mobility management efforts in that it is not tied to one specific access technology. In earlier mobile cellular standards, such as GSM, radio resource and mobility management was integrated vertically into one system. The same is also true for mobile packet data standards such as CDPD (Cellular Digital Packet Data) and the internal packet data mobility protocol (GTP/MAP) of GPRS/UMTS networks. This vertical mobility management property is also inherent in the increasingly popular 802.11 wireless LAN standard.

Mobile IP can be seen as the least common mobility denominator, providing seamless macro-mobility solutions across this diversity of accesses. Mobile IP defines a Home Agent as an anchor point with which the mobile client always has a relationship, and a Foreign Agent, which acts as the local tunnel endpoint at the access network the mobile client is visiting. Depending on which network the mobile client is currently visiting, its point of attachment (Foreign Agent) may change. At each point of attachment, Mobile IP requires either the availability of a standalone Foreign Agent or the use of a co-located care-of address in the mobile client itself.
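As a rough illustration of the Home Agent's role, the sketch below models a binding table that maps a mobile node's home address to its current care-of address, so that forwarding a packet becomes a table lookup. This is a deliberately simplified picture, and the names used (HomeAgent, registerBinding, forward) are hypothetical; the real Mobile IP registration and tunnelling procedures involve considerably more machinery.

```typescript
// Simplified sketch of a Home Agent binding table (not the real protocol).

interface Binding {
  careOfAddress: string;   // Foreign Agent or co-located care-of address
  expires: number;         // registration lifetime (ms since epoch)
}

class HomeAgent {
  private bindings = new Map<string, Binding>(); // home address -> binding

  // Called when the mobile node registers from a visited network.
  registerBinding(homeAddress: string, careOfAddress: string, lifetimeMs: number): void {
    this.bindings.set(homeAddress, { careOfAddress, expires: Date.now() + lifetimeMs });
  }

  // Decide where to send a packet addressed to the home address.
  forward(homeAddress: string): string {
    const b = this.bindings.get(homeAddress);
    if (b && b.expires > Date.now()) {
      return `tunnel to ${b.careOfAddress}`; // mobile node is away from home
    }
    return "deliver on home network";        // no valid binding registered
  }
}
```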

The concept of "mobility", or "packet data mobility", means different things depending on the context in which it is used. In a wireless or fixed environment, there are many different ways of implementing partial or full mobility and roaming services. The most common ways of implementing mobility (discrete mobility or an IP roaming service) in today's IP networking environments include simple PPP dial-up as well as company-internal mobility solutions implemented by renewing the IP address at each new point of attachment. The most commonly deployed way of supporting remote access users in today's Internet is to use the public telephone network (fixed or mobile) together with PPP dial-up.

Definition

These notes provide an introduction to unsupervised neural networks, in particular Kohonen self-organizing maps, together with some fundamental background material on statistical pattern recognition.

One question which seems to puzzle many of those who encounter unsupervised learning for the first time is: how can anything useful be achieved when input information is simply poured into a black box, with no rules provided as to how this information should be stored and no examples of the groups into which it can be placed? If the information is sorted on the basis of how similar one input is to another, then we will have accomplished an important step in condensing the available information by developing a more compact representation.

We can represent this information, and any subsequent information, in a much reduced fashion. We will know which information is more likely. This black box will certainly have learned. It may permit us to perceive some order in what was otherwise a mass of unrelated information, to see the wood for the trees.

In any learning system, we need to make full use of all the available data and to impose any constraints that we feel are justified. If we know what groups the information must fall into, that certain combinations of inputs preclude others, or that certain rules underlie the production of the information, then we must use them. Often, we do not possess such additional information. Consider two examples of experiments: one designed to test a particular hypothesis, say, to determine the effects of alcohol on driving; the second to investigate any possible connection between car accidents and the driver's lifestyle.

In the first experiment, we could arrange a laboratory-based experiment where volunteers took measured amounts of alcohol and then attempted some motor-skill activity (e.g., following a moving light on a computer screen by moving the mouse). We could collect the data (i.e., amount of alcohol vs. error rate on the computer test), conduct the customary statistical tests and, finally, draw our conclusions. Our hypothesis may be that the more alcohol consumed, the greater the error rate, and we can confirm this on the basis of the experiment. Note that we cannot prove the relationship, only state that we are 99% certain (or whatever level we set ourselves) that the result is not due purely to chance.

The second experiment is much more open-ended (indeed, it could be argued that it is not really an experiment). Data is collected from a large number of drivers: those that have been involved in accidents and those that have not. This data could include the driver's age, occupation, health details, drinking habits, etc. From this mass of information, we can attempt to discover any possible connections. A number of conventional statistical tools exist to support this (e.g., factor analysis). We may discover possible relationships, including one between accidents and drinking, but perhaps many others as well. There could be a number of leads that need following up. Both approaches are valid in searching for causes underlying road accidents. This second experiment can be considered an example of unsupervised learning.

The next section provides some introductory background material on statistical pattern recognition. The terms and concepts will be useful in understanding the later material on unsupervised neural networks. As the approach underlying unsupervised networks is the measurement of how similar (or different) various inputs are, we need to consider how the distances between these inputs are measured. This forms the basis of Section Three, together with a brief description of non-neural approaches to unsupervised learning. Section Four discusses the background to, and basic algorithm of, Kohonen self-organizing maps. The next section details some of the properties of these maps and introduces several useful practical points. The final section provides pointers to further information on unsupervised neural networks.
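Although the Kohonen algorithm itself is only introduced later in these notes, its core idea can be sketched briefly: find the unit whose weight vector is closest to the input, then pull that unit and its map neighbours a little towards the input. The sketch below is a minimal illustration on a one-dimensional map with a fixed learning rate and neighbourhood radius; practical implementations decay both over time.

```typescript
// Minimal sketch of one Kohonen SOM training step on a 1-D map.

type Vec = number[];

function squaredDistance(a: Vec, b: Vec): number {
  return a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0);
}

// Index of the best-matching unit (closest weight vector to the input).
function bestMatchingUnit(weights: Vec[], x: Vec): number {
  let best = 0;
  for (let i = 1; i < weights.length; i++) {
    if (squaredDistance(weights[i], x) < squaredDistance(weights[best], x)) best = i;
  }
  return best;
}

// One update: move the winner and its map neighbours towards the input.
function trainStep(weights: Vec[], x: Vec, learningRate = 0.1, radius = 1): void {
  const winner = bestMatchingUnit(weights, x);
  for (let i = 0; i < weights.length; i++) {
    if (Math.abs(i - winner) <= radius) {
      for (let d = 0; d < x.length; d++) {
        weights[i][d] += learningRate * (x[d] - weights[i][d]);
      }
    }
  }
}
```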

Definition

Survivability In Network Systems

Contemporary large-scale networked systems that are highly distributed improve the efficiency and effectiveness of organizations by permitting whole new levels of organizational integration. However, such integration is accompanied by elevated risks of intrusion and compromise. These risks can be mitigated by incorporating survivability capabilities into an organization's systems. As an emerging discipline, survivability builds on related fields of study (e.g., security, fault tolerance, safety, reliability, reuse, performance, verification, and testing) and introduces new concepts and principles. Survivability focuses on preserving essential services in unbounded environments, even when systems in such environments are penetrated and compromised.

The New Network Paradigm: Organizational Integration

From their modest beginnings some 20 years ago, computer networks have become a critical element of modern society. These networks not only have global reach, they also have impact on virtually every aspect of human endeavor. Network systems are principal enabling agents in business, industry, government, and defense. Major economic sectors, including defense, energy, transportation, telecommunications, manufacturing, financial services, health care, and education, all depend on a vast array of networks operating on local, national, and global scales. This pervasive societal dependency on networks magnifies the consequences of intrusions, accidents, and failures, and amplifies the critical importance of ensuring network survivability.

As organizations seek to improve efficiency and competitiveness, a new network paradigm is emerging. Networks are being used to achieve radical new levels of organizational integration. This integration obliterates traditional organizational boundaries and transforms local operations into components of comprehensive, network-resident business processes. For example, commercial organizations are integrating operations with business units, suppliers, and customers through large-scale networks that enhance communication and services.

These networks combine previously fragmented operations into coherent processes open to many organizational participants. This new paradigm represents a shift from bounded networks with central control to unbounded networks. Unbounded networks are characterized by distributed administrative control without central authority, limited visibility beyond the boundaries of local administration, and lack of complete information about the network. At the same time, organizational dependencies on networks are increasing and risks and consequences of intrusions and compromises are amplified.

The Definition of Survivability

We define survivability as the capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents. We use the term system in the broadest possible sense, including networks and large-scale systems of systems. The term mission refers to a set of very high-level (i.e., abstract) requirements or goals.

Missions are not limited to military settings since any successful organization or project must have a vision of its objectives whether expressed implicitly or as a formal mission statement. Judgments as to whether or not a mission has been successfully fulfilled are typically made in the context of external conditions that may affect the achievement of that mission. For example, assume that a financial system shuts down for 12 hours during a period of widespread power outages caused by a hurricane.

If the system preserves the integrity and confidentiality of its data and resumes its essential services after the period of environmental stress is over, the system can reasonably be judged to have fulfilled its mission. However, if the same system shuts down unexpectedly for 12 hours under normal conditions (or under relatively minor environmental stress) and deprives its users of essential financial services, the system can reasonably be judged to have failed its mission, even if data integrity and confidentiality are preserved.

Definition

The world of mobile computing has seldom been so exciting. Not, at least, for the last three years, when all the chip giants could think of was scaling down the frequency and voltage of desktop CPUs and labeling them as mobile processors. Intel Centrino mobile technology is based on the understanding that mobile customers value the four vectors of mobility: performance, battery life, small form factor, and wireless connectivity. The technologies represented by the Intel Centrino brand include an Intel Pentium-M processor, the Intel 855 chipset family, and the Intel PRO/Wireless 2100 network connection.

The Intel Pentium-M processor is a higher performance, lower power mobile processor with several micro-architectural enhancements over existing Intel mobile processors. Some key features of the Intel Pentium-M processor micro-architecture include Dynamic Execution, a 400-MHz processor system bus, an on-die 1-MB second-level cache with Advanced Transfer Cache Architecture, Streaming SIMD Extensions 2, and Enhanced Intel SpeedStep technology.

The Intel Centrino mobile technology also includes the 855GM chipset components: the GMCH and the ICH4-M. The Accelerated Hub Architecture is designed into the chipset to provide an efficient, high-bandwidth communication channel between the GMCH and the ICH4-M. The GMCH component contains a processor system bus controller, a graphics controller, and a memory controller, while providing an LVDS interface and two DVO ports.

The integrated Wi-Fi Certified Intel PRO/Wireless 2100 Network Connection has been designed and validated to work with all of the Intel Centrino mobile technology components and is able to connect to 802.11b Wi-Fi certified access points. It also supports advanced wireless LAN security including Cisco LEAP, 802.1X and WEP. Finally, for comprehensive security support, the Intel PRO/Wireless 2100 Network Connection has been verified with leading VPN suppliers like Cisco, CheckPoint, Microsoft and Intel NetStructure.

Pentium-M Processor

The Intel Pentium-M processor is a high performance, low power mobile processor with several micro-architectural enhancements over existing Intel mobile processors. The following list provides some of the key features of this processor:

- Supports Intel Architecture with Dynamic Execution
- High performance, low-power core
- On-die, 1-MByte second level cache with Advanced Transfer Cache Architecture
- Advanced Branch Prediction and Data Prefetch Logic
- Streaming SIMD Extensions 2 (SSE2)
- 400-MHz, source-synchronous processor system bus
- Advanced Power Management features including Enhanced Intel SpeedStep technology
- Micro-FCPGA and Micro-FCBGA packaging technologies

The Intel Pentium-M processor is manufactured on Intel's advanced 0.13 micron process technology with copper interconnect. The processor maintains support for MMX technology and Internet Streaming SIMD instructions and full compatibility with IA-32 software. The high performance core features architectural innovations like Micro-op Fusion and Advanced Stack Management that reduce the number of micro-ops handled by the processor. This results in more efficient scheduling and better performance at lower power.

The on-die 32-kB Level 1 instruction and data caches and the 1-MB Level 2 cache with Advanced Transfer Cache Architecture enable significant performance improvement over existing mobile processors. The processor also features a very advanced branch prediction architecture that significantly reduces the number of mispredicted branches. The processor's Data Prefetch Logic speculatively fetches data into the L2 cache before an L1 cache request occurs, resulting in reduced bus cycle penalties and improved performance.

Definition

When we talk about free software, we usually refer to the free software licenses. We also need relief from software patents, so our freedom is not restricted by them. But there is a third type of freedom we need, and that's user freedom.

Expert users don't take a system as it is. They like to change the configuration, and they want to run the software that works best for them. That includes window managers as well as your favourite text editor. But even on a GNU/Linux system consisting only of free software, you cannot easily use the filesystem format, network protocol or binary format you want without special privileges. In traditional Unix systems, user freedom is severely restricted by the system administrator.

The Hurd is built on top of CMU's Mach 3.0 kernel and uses Mach's virtual memory management and message-passing facilities. The GNU C Library will provide the Unix system call interface, and will call the Hurd for needed services it can't provide itself. The design and implementation of the Hurd is being led by Michael Bushnell, with assistance from Richard Stallman, Roland McGrath, Jan Brittenson, and others.

A More Usable Approach To OS Design

The fundamental purpose of an operating system (OS) is to enable a variety of programs to share a single computer efficiently and productively. This demands memory protection, preemptively scheduled timesharing, coordinated access to I/O peripherals, and other services. In addition, an OS can allow several users to share a computer. In this case, efficiency demands services that protect users from harming each other, enable them to share without prior arrangement, and mediate access to physical devices.
On today's computer systems, programmers usually implement these goals through a large program called the kernel. Since this program must be accessible to all user programs, it is the natural place to add functionality to the system. Since the only model for process interaction is that of specific, individual services provided by the kernel, no one creates other places to add functionality. As time goes by, more and more is added to the kernel.

A traditional system allows users to add components to a kernel only if they both understand most of it and have a privileged status within the system. Testing new components requires a much more painful edit-compile-debug cycle than testing other programs. It cannot be done while others are using the system. Bugs usually cause fatal system crashes, further disrupting others' use of the system. The entire kernel is usually non-pageable. (There are systems with pageable kernels, but deciding what can be paged is difficult and error prone. Usually the mechanisms are complex, making them difficult to use even when adding simple extensions.)

Because of these restrictions, functionality which properly belongs behind the wall of a traditional kernel is usually left out of systems unless it is absolutely mandatory. Many good ideas, best done with an open/read/write interface, cannot be implemented because of the problems inherent in the monolithic nature of a traditional system. Further, even among those with the endurance to implement new ideas, only those who are privileged users of their computers can do so. The software copyright system darkens the mire by preventing unlicensed people from even reading the kernel source. The Hurd removes these restrictions from the user. It provides a user-extensible system framework without giving up POSIX compatibility and the Unix security model.

When Richard Stallman founded the GNU project in 1983, he wanted to write an operating system consisting only of free software. Very soon, a lot of the essential tools were implemented and released under the GPL. However, one critical piece was missing: the kernel. After considering several alternatives, it was decided not to write a new kernel from scratch, but to start with the Mach microkernel.

Definition

The high-tech industry has spent decades creating computer systems with ever-mounting degrees of complexity to solve a wide variety of business problems. Ironically, complexity itself has become part of the problem. As networks and distributed systems grow and change, they can become increasingly hampered by system deployment failures and hardware and software issues, not to mention human error. Such scenarios in turn require further human intervention to enhance the performance and capacity of IT components. This drives up overall IT costs, even though technology component costs continue to decline. As a result, many IT professionals seek ways to improve their return on investment in their IT infrastructure by reducing the total cost of ownership of their environments while improving the quality of service for users.

Self managing computing helps address these complexity issues by using technology to manage technology. The idea is not new: many of the major players in the industry have developed and delivered products based on this concept. Self managing computing is also known as autonomic computing.

The term autonomic is derived from human biology. The autonomic nervous system monitors your heartbeat, checks your blood sugar level and keeps your body temperature close to 98.6°F, without any conscious effort on your part. In much the same way, self managing computing components anticipate computer system needs and resolve problems with minimal human intervention.

Self managing computing systems have the ability to manage themselves and dynamically adapt to change in accordance with business policies and objectives. Self-managing systems can perform management activities based on situations they observe or sense in the IT environment. Rather than IT professionals initiating management activities, the system observes something about itself and acts accordingly. This allows the IT professional to focus on high-value tasks while the technology manages the more mundane operations. Self managing computing can result in a significant improvement in system management efficiency, when the disparate technologies that manage the environment work together to deliver performance results system wide.

However, complete autonomic systems do not yet exist. This is not a proprietary solution. It's a radical change in the way businesses, academia, and even the government design, develop, manage and maintain computer systems. Self managing computing calls for a whole new area of study and a whole new way of conducting business.
Self managing computing is the self-management of e-business infrastructure, balancing what is managed by the IT professional and what is managed by the system. It is the evolution of e-business.

What is self managing computing?
Self managing computing is about freeing IT professionals to focus on high-value tasks by making technology work smarter. This means letting computing systems and infrastructure take care of managing themselves. Ultimately, it is writing business policies and goals and letting the infrastructure configure, heal and optimize itself according to those policies while protecting itself from malicious activities. Self managing computing systems have the ability to manage themselves and dynamically adapt to change in accordance with business policies and objectives.

Definition

Computers are as ubiquitous as automobiles and toasters, but exploiting their capabilities still seems to require the training of a supersonic test pilot. VCR displays blinking a constant 12 noon around the world testify to this conundrum. As interactive television, palmtop diaries and "smart" credit cards proliferate, the gap between millions of untrained users and an equal number of sophisticated microprocessors will become even more sharply apparent. With people spending a growing proportion of their lives in front of computer screens--informing and entertaining one another, exchanging correspondence, working, shopping and falling in love--some accommodation must be found between limited human attention spans and increasingly complex collections of software and data.

Computers currently respond only to what interface designers call direct manipulation. Nothing happens unless a person gives commands from a keyboard, mouse or touch screen. The computer is merely a passive entity waiting to execute specific, highly detailed instructions; it provides little help for complex tasks or for carrying out actions (such as searches for information) that may take an indefinite time.

If untrained consumers are to employ future computers and networks effectively, direct manipulation will have to give way to some form of delegation. Researchers and software companies have set high hopes on so called software agents, which "know" users' interests and can act autonomously on their behalf. Instead of exercising complete control (and taking responsibility for every move the computer makes), people will be engaged in a cooperative process in which both human and computer agents initiate communication, monitor events and perform tasks to meet a user's goals.

The average person will have many alter egos, in effect digital proxies, operating simultaneously in different places. Some of these proxies will simply make the digital world less overwhelming by hiding technical details of tasks, guiding users through complex on-line spaces or even teaching them about certain subjects. Others will actively search for information their owners may be interested in or monitor specified topics for critical changes. Yet other agents may have the authority to perform transactions (such as on-line shopping) or to represent people in their absence. As the proliferation of paper and electronic pocket diaries has already foreshadowed, software agents will have a particularly helpful role to play as personal secretaries: extended memories that remind their bearers where they have put things, whom they have talked to, what tasks they have already accomplished and which remain to be finished.

Agent programs differ from regular software mainly by what can best be described as a sense of themselves as independent entities. An ideal agent knows what its goal is and will strive to achieve it. An agent should also be robust and adaptive, capable of learning from experience and responding to unforeseen situations with a repertoire of different methods. Finally, it should be autonomous so that it can sense the current state of its environment and act independently to make progress toward its goal.

1.2 DEFINITION OF INTELLIGENT SOFTWARE AGENTS:

Intelligent software agents are a popular research topic these days. Because the term "agent" is currently used by many parties in many different ways, it has become difficult for users to make a good estimate of what the possibilities of agent technology are. Moreover, these agents may have a wide range of applications, which may significantly affect the definition; hence it is not easy to craft a rock-solid definition that can be generalized for all. However, an informal definition of an intelligent software agent may be given as:

"A piece of software which performs a given task using information gleaned from its environment to act in a suitable manner so as to complete the task successfully. The software should be able to adapt itself based on changes occurring in its environment, so that a change in circumstances will still yield the intended result."

 

Definition

Laptops are becoming as common as cellular phones, and they now share the hardware industry with desktop computers, offering a number of configurable options: the features, the price, the build quality, the weight and dimensions, the display, the battery uptime or, for that matter, the ease of the trackball. Earlier there were hardly any configurable options available, but today we have a variety of laptops in different configurations, with the processor and just about anything else you want.

Companies such as Intel, AMD, Transmeta and nVidia, to name only a few, are making laptops both a hype and a reality. Intel and AMD have brought out technologies such as SpeedStep to preserve battery power in laptops.
If you are on the move all the time, you probably need a laptop that will not only enable you to create documents, spreadsheets and presentations, but also send and receive e-mail, access the web and maybe even play music CDs or watch a DVD movie for that much-deserved break. You also need a laptop that is sturdy enough to take the bumps and jolts in its stride while you are on the move.

If, on the other hand, you want a laptop for basic tasks and primarily for mobility, so that your work does not get held up on the occasions that you need to travel, then you would not necessarily need the best in terms of the choice and power of its individual subsystems. Therefore, if the CD-ROM drive or floppy drive is not integrated into the main unit but is supplied as an additional peripheral, the frequent traveller would hardly mind, because the overall weight of the laptop would be significantly lower and easier on the shoulder after a long day of commuting.

History
Alan Kay of the Xerox Palo Alto Research Center originated the idea of a portable computer in the 1970s. Kay envisioned a notebook-sized portable computer called the Dynabook that everyone could own and that could handle all of the user's informational needs. Kay also envisioned the Dynabook with wireless network capabilities. Arguably, the first laptop computer was designed in 1979 by William Moggridge of Grid Systems Corp. It had 340 kilobytes of bubble memory, a die-cast magnesium case and a folding electroluminescent graphics display screen. In 1983 Gavilan Computer produced a laptop computer with the following features:

" 64 kilobytes (expandable to 128 kilobytes)
of Random Access Memory
" Gavilan operating system (also van MS-DOS)
" 8088 microprocessor
" Touchpad mouse
" Portable printer
" Weighed 9 lb(4kg) alone or 14 lb (6.4 kg) with printer

The Gavilan computer had a floppy drive that was not compatible with other computers, and it primarily used its own operating system. The company failed. In 1984, the Apple IIc was a notebook-sized computer but not a true laptop. It had a 65C02 microprocessor, 128 KB of memory, an internal 5.25-inch floppy drive, two serial ports, a mouse port, a modem card, an external power supply and a carrying handle.

Definition

For some time now, both small and large companies have been building robust applications for personal computers that continue to become ever more powerful and available at increasingly lower costs. While these applications are used by millions of users each day, new forces are having a profound effect on the way software developers build applications today and on the platform on which they develop and deploy their applications.

The increased presence of Internet technologies is enabling global sharing of information-not only from small and large businesses, but individuals as well. The Internet has sparked a new creativity in many, resulting in many new businesses popping up overnight, running 24 hours a day, seven days a week. Competition and the increased pace of change are putting ever-increasing demands for an application platform that enables application developers to build and rapidly deploy highly adaptive applications in order to gain strategic advantage.

It is possible to think of these new Internet applications needing to handle literally millions of users, a scale difficult to imagine just a few short years ago. As a result, applications need to deal with user volumes of this scale, be reliable enough to operate 24 hours a day, and be flexible enough to meet changing business needs. The application platform that underlies these types of applications must also provide a coherent application model along with a set of infrastructure and prebuilt services for enabling development and management of these new applications.

Introducing Windows DNA: Framework for a New Generation of Computing Solutions

Today, the convergence of Internet and Windows computing technologies promises exciting new opportunities for savvy businesses: to create a new generation of computing solutions that dramatically improve the responsiveness of the organization, to more effectively use the Internet and the Web to reach customers directly, and to better connect people to information any time or any place. When a technology system delivers these results, it is called a Digital Nervous System. A Digital Nervous System relies on connected PCs and integrated software to make the flow of information rapid and accurate. It helps everyone act faster and make more informed decisions. It prepares companies to react to unplanned events. It allows people to focus on business, not technology.

Creating a true Digital Nervous System takes commitment, time, and imagination. It is not something every company will have the determination to do. But those who do will have a distinct advantage over those who don't. In creating a Digital Nervous System, organizations face many challenges: How can they take advantage of new Internet technologies while preserving existing investments in people, applications, and data? How can they build modern, scalable computing solutions that are dynamic and flexible to change? How can they lower the overall cost of computing while making complex computing environments work?

Introduction

Amorphous computing consists of a multitude of interacting computers with modest computing power and memory, and modules for intercommunication. These collections of devices are known as swarms. The desired coherent global behaviour of the computer is achieved from the local interactions between the individual agents. The global behaviour of these vast numbers of unreliable agents is resilient to a small fraction of misbehaving agents and to a noisy, hostile environment. This makes them highly useful for sensor networks, MEMS, Internet nodes, etc. Presently, of the 8 billion computational units existing worldwide, only 2% are stand-alone computers. This proportion is projected to decrease further with the paradigm shift to the biologically inspired amorphous computing model. This paper gives an insight into amorphous and swarm computing.

The ideas for amorphous computing have been derived from the swarm behaviour of social organisms such as ants, bees and bacteria. Recently, biologists and computer scientists studying artificial life have modelled biological swarms to understand how such social animals interact, achieve goals and evolve. A certain level of intelligence, exceeding that of the individual agents, results from the swarm behaviour. An amorphous computer is established with a collection of computing particles - with modest memory and computing power - spread out over a geographical space and running identical programs. Swarm intelligence may be derived from the randomness, repulsion and unpredictability of the agents, thereby resulting in diverse solutions to the problem. There are no known criteria to evaluate swarm intelligence performance.

Inspiration


The development of swarm computing has been inspired by several natural phenomena.
Some of the most complex activities, like optimal path finding, are executed by simple organisms. Lately, MEMS research has paved the way for manufacturing swarm agents at low cost and with high efficiency.

The biological world


In the case of ant colonies, the worker ants have decentralised control and a robust mechanism for some complex activities such as foraging, finding the shortest path to a food source and back home, building and protecting nests, and finding the richest food source in the locality. The ants communicate using pheromones. Trails of pheromone are laid down by a given ant and can be followed by other ants. Depending on the species, ants lay trails travelling from the nest, to the nest or possibly in both directions. Pheromones evaporate over time. Pheromones also accumulate when multiple ants use the same path. As the ants forage, the optimal path to food is likely to have the highest deposition of pheromones, as a greater number of ants follow this path and deposit pheromones. The longer paths are less likely to be travelled and therefore have only a smaller concentration of pheromones. With time, most of the ants follow the optimal path. When the food sources deplete, the pheromones evaporate and new trails can be discovered. This optimal path finding approach is highly dynamic and robust.
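This behaviour is the basis of ant-colony-style path finding, which can be caricatured as follows: choose a route with probability proportional to its pheromone level, deposit pheromone on the route taken (more per unit time on shorter routes), and let all pheromone evaporate each round. The sketch below shows only this caricature for two competing routes; it is not a full ant colony optimisation implementation, and all parameter values are invented.

```typescript
// Caricature of pheromone-based path choice between two routes to food.

let pheromone = [1, 1];            // route 0 (short) and route 1 (long)
const routeLength = [1, 2];        // shorter routes are completed more often
const evaporation = 0.1;

function chooseRoute(): number {
  const total = pheromone[0] + pheromone[1];
  return Math.random() < pheromone[0] / total ? 0 : 1;
}

for (let step = 0; step < 1000; step++) {
  const r = chooseRoute();
  // Deposit more pheromone per time unit on the shorter route.
  pheromone[r] += 1 / routeLength[r];
  // Evaporation lets the colony forget stale trails.
  pheromone = pheromone.map(p => p * (1 - evaporation));
}

// After many steps, most pheromone (and hence most ants) ends up on route 0.
console.log(pheromone);
```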


Similar organization and behaviour are also present in flocks of birds. For a bird to participate in a flock, it only adjusts its movements to coordinate with the movements of its flock mates, typically the neighbours that are close to it in the flock. A bird in a flock simply tries to stay close to its neighbours while avoiding collisions with them. No bird takes commands from a leader bird, because there is no lead bird: any bird can fly at the front, centre or back of the swarm. Swarm behaviour helps birds take advantage of several things, including protection from predators (especially for birds in the middle of the flock) and searching for food (essentially each bird is exploiting the eyes of every other bird). Even complex biological entities like the brain are a swarm of interacting simple agents, the neurons. Each neuron does not have the holistic picture, but processes simple elements through its interaction with a few other neurons and so paves the way for the thinking process.
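The flocking behaviour can likewise be expressed with simple local rules in the spirit of the classic 'boids' model: steer towards the flock (cohesion) and away from birds that are too close (separation), with no leader. The one-dimensional sketch below is a deliberately reduced version that omits the alignment rule and uses invented parameters.

```typescript
// Minimal 1-D flocking sketch: cohesion plus separation, no leader.

function stepFlock(positions: number[], cohesion = 0.05, minGap = 1): number[] {
  const centre = positions.reduce((s, p) => s + p, 0) / positions.length;
  return positions.map((p, i) => {
    let move = cohesion * (centre - p);           // steer towards the flock centre
    for (let j = 0; j < positions.length; j++) {  // steer away from birds that are too close
      if (j !== i && Math.abs(positions[j] - p) < minGap) {
        move += p > positions[j] ? 0.1 : -0.1;
      }
    }
    return p + move;
  });
}

let flock = [0, 0.2, 5, 9, 9.3];
for (let t = 0; t < 100; t++) flock = stepFlock(flock);
console.log(flock); // birds cluster loosely while keeping a minimum spacing
```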

Introduction


The seminar aims at introducing various other forms of computation. Concepts of quantum computing and DNA computing are introduced and discussed, and particular algorithms (like Shor's algorithm) are covered. A solution of the Travelling Salesman Problem using DNA computing is also discussed. In fine, the seminar aims at opening windows to topics that may become tomorrow's mainstay in computer science.

Richard Feynman thought up the idea of a 'quantum computer', a computer that uses the effects of quantum mechanics to its advantage. Initially, the idea of a quantum computer was primarily of theoretical interest only, but recent developments have brought the idea to the foreground. To start with was the invention of an algorithm to factor large numbers on a quantum computer by Peter Shor of Bell Labs. By using this algorithm, a quantum computer would be able to crack codes much more quickly than any ordinary (or classical) computer could.
In fact, a quantum computer capable of performing Shor's algorithm would be able to break current cryptographic techniques (like RSA) in a matter of seconds. With the motivation provided by this algorithm, quantum computing has gathered momentum and is a hot topic for research around the globe. Leonard M. Adleman solved an unremarkable computational problem with an exceptional technique: he used 'mapping' to solve the Travelling Salesman Problem. It was a problem that an average desktop machine could solve in a fraction of a second; Adleman, however, took seven days to find a solution. Even then his work was exceptional, because he solved the problem with DNA. It was a breakthrough and a landmark demonstration of computing at the molecular level.


Both quantum computing and DNA computing have two aspects: firstly, building the computer, and secondly, deploying the computer to solve problems that are hard to solve within the present domain of the von Neumann architecture. In this seminar we consider the latter.

Shor's Algorithm


Shor's algorithm is based on a result from number theory, which states that the function

f(a) = x^a mod n

is a periodic function when x and n are coprime. In the context of Shor's algorithm, n is the number we wish to factor. By coprime we mean that the greatest common divisor of x and n is one.
If implemented, the algorithm will have a profound effect on cryptography, as it would compromise the security provided by public-key encryption (such as RSA). We all know that this security rests on the 'hard' factoring problem; Shor's algorithm makes factoring easy using quantum computing techniques.
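The quantum computer's job in Shor's algorithm is to find the period r of f(a) = x^a mod n efficiently; once r is known, the factoring step is ordinary number theory: for even r, gcd(x^(r/2) - 1, n) and gcd(x^(r/2) + 1, n) are very likely non-trivial factors of n. The sketch below illustrates only that classical arithmetic, finding the period by brute force rather than by quantum means, so it shows the idea but none of the speed-up.

```typescript
// Classical illustration of the number theory behind Shor's algorithm.
// The period is found by brute force here; the quantum computer's job
// is to find it exponentially faster.

function gcd(a: bigint, b: bigint): bigint {
  return b === 0n ? a : gcd(b, a % b);
}

function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Smallest r > 0 with x^r = 1 (mod n); requires gcd(x, n) = 1.
function findPeriod(x: bigint, n: bigint): bigint {
  let r = 1n;
  while (modPow(x, r, n) !== 1n) r++;
  return r;
}

// Example: factor n = 15 with x = 7 (coprime to 15).
const n = 15n, x = 7n;
const r = findPeriod(x, n);            // r = 4
if (r % 2n === 0n) {
  const y = modPow(x, r / 2n, n);      // x^(r/2) mod n
  console.log(gcd(y - 1n, n), gcd(y + 1n, n)); // 3 and 5
}
```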

Introduction


Web application design has evolved considerably since its birth. To make web pages more interactive, various techniques have been devised both at the browser level and at the server level. The introduction of the XMLHttpRequest class in Internet Explorer 5 by Microsoft paved the way for interacting with the server asynchronously using JavaScript. AJAX, shorthand for Asynchronous JavaScript and XML, is a technique which uses this XMLHttpRequest object of the browser, together with the Document Object Model and DHTML, and makes it possible to build highly interactive web applications in which the entire web page need not be reloaded on a user action; only parts of the page are updated dynamically by exchanging information with the server. This approach has enhanced the interactivity and speed of web applications to a great extent. Interactive applications such as Google Maps, Orkut and instant messengers make extensive use of this technique. This report presents an overview of the basic concepts of AJAX and how it is used in making web applications.

Creating web applications has been considered one of the most exciting jobs in current interaction design. But web interaction designers can't help feeling a little envious of their colleagues who create desktop software. Desktop applications have a richness and responsiveness that has seemed out of reach on the Web. The same simplicity that enabled the Web's rapid proliferation also creates a gap between the experiences that can be provided through web applications and the experiences users can get from a desktop application.
In the earliest days of the Web, designers chafed against the constraints of the medium. The entire interaction model of the Web was rooted in its heritage as a hypertext system: click the link, request the document, wait for the server to respond. Designers could not think of changing the basic foundation of the web, that is, the call-response model, to improve web applications, because of the various caveats, restrictions and compatibility issues associated with it.
But the urge to enhance the responsiveness of web applications made designers take up the task of making the Web work the best it could within the hypertext interaction model, developing new conventions for web interaction that allowed their applications to reach audiences who would never have attempted to use desktop applications designed for the same tasks. The designers came up with a technique called AJAX, shorthand for Asynchronous JavaScript and XML, which is a web development technique for creating interactive web applications. The intent is to make web pages feel more responsive by exchanging small amounts of data with the server behind the scenes, so that the entire web page does not have to be reloaded each time the user makes a change. This increases the web page's interactivity, speed, and usability. AJAX is not a single new technology of its own but a bunch of several technologies, each flourishing in its own right, coming together in powerful new ways.

What is AJAX?


AJAX is a set of technologies combined in an efficient manner so that the web application runs better, utilizing the benefits of all of them simultaneously. AJAX incorporates the following (see the sketch after this list):
1. standards-based presentation using XHTML and CSS;
2. dynamic display and interaction using the Document Object Model;
3. data interchange and manipulation using XML and XSLT;
4. asynchronous data retrieval using XMLHttpRequest;
5. and JavaScript binding everything together.
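As an illustration of items 4 and 5 above, the browser-side sketch below fetches a small piece of data asynchronously and updates a single element of the page without a full reload. The URL and element id are placeholders invented for the example.

```typescript
// Minimal AJAX-style request: fetch data in the background and update
// part of the page instead of reloading it (runs in a browser).

function loadLatestNews(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/latest-news", true);   // true = asynchronous; placeholder URL

  xhr.onreadystatechange = () => {
    if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
      // Only this element changes; the rest of the page stays as it is.
      const target = document.getElementById("news"); // placeholder element id
      if (target) target.textContent = xhr.responseText;
    }
  };

  xhr.send();
}

loadLatestNews();
```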
