Technology

Present software technologies

Definition



The PIVOT VECTOR SPACE APPROACH is a novel technique of audio-video mixing which automatically selects the best audio clip from the available database to be mixed with a given video shot. Until the development of this technique, audio-video mixing was a process that could be done only by professional audio-mixing artists. However, employing these artists is very expensive and is not feasible for home video mixing. Besides, the process is time-consuming and tedious.


In today's era, significant advances are happening constantly in the field of Information Technology. The development in IT-related fields such as multimedia is extremely vast. This is evident with the release of a variety of multimedia products such as mobile handsets, portable MP3 players, digital video camcorders, handycams, etc. Hence, certain activities such as the production of home videos are easy thanks to products such as handycams and digital video camcorders. Such a scenario did not exist a decade ago, since no such products were available in the market. As a result, the production of home videos was not possible; it was reserved completely for professional video artists.


So in today's world, a large number of home videos are being made, and the number of amateur and home video enthusiasts is very large. A home video artist can never match the aesthetic capabilities of a professional audio-mixing artist. However, employing a professional mixing artist to develop a home video is not feasible, as it is expensive, tedious and time-consuming.


The PIVOT VECTOR SPACE APPROACH is a technique that all amateur and home video enthusiasts can use to create video footage with a professional look and feel. This technique saves cost and is fast. Since it is fully automatic, the user need not worry about his aesthetic capabilities. The PIVOT VECTOR SPACE APPROACH uses a pivot vector space mixing framework to incorporate the artistic heuristics for mixing audio with video. These artistic heuristics use high-level perceptual descriptors of audio and video characteristics. Low-level signal processing techniques compute these descriptors.


Video Aesthetic Features


The table shows, from the cinematic point of view, a set of attributed features (such as color and motion) required to describe videos. The computations for extracting aesthetic attributed features from low-level video features occur at the video shot granularity. Because some attributed features are based on still images (such as high light falloff), we compute them on the key frame of a video shot. We try to optimize the trade-off in accuracy and computational efficiency among the competing extraction methods. Also, even though we assume that the videos considered come in the MPEG format (widely used by several home video camcorders), the features exist independently of a particular representation format.
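As a purely illustrative sketch (not the authors' actual implementation), the Python snippet below computes one such low-level attribute on the key frame of a shot, its average brightness, normalized to a 0-1 descriptor. The file name, key-frame index and use of OpenCV are assumptions made for the example.

# Minimal sketch: compute a simple aesthetic descriptor (key-frame brightness)
# for a video shot. Assumes OpenCV (cv2) and NumPy are installed; "shot.mpg"
# and the key-frame index are placeholders.
import cv2
import numpy as np

def keyframe_brightness(video_path, keyframe_index=0):
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, keyframe_index)  # seek to the key frame
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise IOError("could not read key frame")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # luminance only
    return float(np.mean(gray)) / 255.0               # 0 = dark, 1 = bright

if __name__ == "__main__":
    print(keyframe_brightness("shot.mpg"))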

Definition

The term On-Line Analytical Processing (OLAP) was coined by E.F. Codd in 1993 to refer to a type of application that allows a user to interactively analyze data. An OLAP system is often contrasted with an OLTP (On-Line Transaction Processing) system, which focuses on processing transactions such as orders, invoices or general ledger transactions. Before the term OLAP was coined, these systems were often referred to as Decision Support Systems.
OLAP is now acknowledged as a key technology for successful management in the '90s. It describes a class of applications that require multidimensional analysis of business data.

OLAP systems enable managers and analysts to rapidly and easily examine key performance data and perform powerful comparison and trend analyses, even on very large data volumes. They can be used in a wide variety of business areas, including sales and marketing analysis, financial reporting, quality tracking, profitability analysis, manpower and pricing applications, and many others.

OLAP technology is being used in an increasingly wide range of applications. The most common are sales and marketing analysis; financial reporting and consolidation; and budgeting and planning. Increasingly, however, OLAP is being used for applications such as product profitability and pricing analysis, activity-based costing, manpower planning and quality analysis; in fact, for any management system that requires a flexible, top-down view of an organization.
Online Analytical Processing (OLAP) is a method of analyzing data in a multidimensional format, often across multiple time periods, with the aim of uncovering the business information concealed within the data. OLAP enables business users to gain an insight into the business through interactive analysis of different views of the business data that have been built up from the operational systems. This approach facilitates a more intuitive and meaningful analysis of business information and assists in identifying important business trends.

OLAP is often confused with data warehousing. OLAP is not a data warehousing methodology; however, it is an integral part of a data warehousing solution. OLAP comes in many different shades, depending on the underlying database structure and the location of the majority of the analytical processing. Thus, the term OLAP has different meanings depending on the specific combination of these variables. This white paper examines the different options to support OLAP, examines the strengths and weaknesses of each and recommends the analytical tasks for which each is most suitable.

OLAP provides the facility to analyze the data held within the data warehouse in a flexible manner. It is an integral component of a successful data warehouse solution; it is not in itself a data warehousing methodology or system. However, the term OLAP has different meanings for different people, as there are many variants of OLAP. This article attempts to put the different OLAP scenarios into context.

OLAP can be defined as the process of converting raw data into business information through multi-dimensional analysis. This enables analysts to identify business strengths and weaknesses, business trends and the underlying causes of these trends. It provides an insight into the business through the interactive analysis of different views of business information that have been built up from raw operating data and that reflect the business users' understanding of the business.
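As a toy illustration of multidimensional analysis (not tied to any particular OLAP product), the Python sketch below rolls hypothetical sales records up along region and quarter dimensions with pandas; the column names and figures are invented for the example.

# Minimal sketch of OLAP-style multidimensional analysis using pandas.
# The data, the dimensions (region, quarter) and the measure (sales) are hypothetical.
import pandas as pd

data = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "North", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q2", "Q1", "Q2"],
    "sales":   [120, 135, 90, 110, 80, 95],
})

# "Roll up" the sales measure over both dimensions; margins adds grand totals.
cube = pd.pivot_table(data, values="sales", index="region",
                      columns="quarter", aggfunc="sum", margins=True)
print(cube)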

Welcome to the computing faculty. Gorokan boasts a state-of-the-art learning facility, the 'Skill Centre', funded by the federal government. This learning space consists of 30 computers placed in a professional modular furniture framework. It also has a second learning space using 15 laptops, a computer workroom, a server room and a kitchen.

The Skill Centre is well equipped with two data projectors and an electronic whiteboard (smartboard). We also have sets of LEGO Robotics to complement our teaching program.

The core software used includes Microsoft Office 2007 and the Adobe CS4 suite of programs, including Dreamweaver, Flash, Illustrator, Fireworks, InDesign and Premiere Elements.

Best-in-class service is CMR's cornerstone. Our dedication to providing outstanding customer care also drives every aspect of our technology's design and development.

CMR.ez is CMR's sophisticated suite of proprietary web-based housing and registration solutions. Powerful. Secure. Feature-rich. Dynamically managing your housing and registration as one integrated process, the CMR.ez system offers the full range of online services and functionality that today's convention planners and delegates expect.

As the needs of the marketplace evolve, so does CMR.ez technology. CMR's ongoing system advancements ensure that innovation never ceases. The end result is a steady release of new and exciting improvements that continue to boost CMR.ez's power and performance.

CMR.ez's core business logic is built to handle any size group or citywide housing program, while being fully customizable to meet the exacting requirements of your program.

INTRODUCTION

The Universal Serial Bus (USB), with one billion units in the installed base, is the most successful interface in PC history. Projections are for 3.5 billion interfaces shipped by 2006. Benefiting from exceptionally strong industry support from all market segments, USB continues to evolve as new technologies and products come to market. It is already the de facto interconnect for PCs, and has proliferated into consumer electronics (CE) and mobile devices as well.

Wireless USB is the first high-speed Personal Wireless Interconnect. Wireless USB will build on the success of wired USB, bringing USB technology into the wireless future. Usage will be targeted at PCs and PC peripherals, consumer electronics and mobile devices. To maintain the same usage and architecture as wired USB, the Wireless USB specification is being defined as a high-speed host-to-device connection. This will enable an easy migration path for today's wired USB solutions.

This paper takes a brief look at the widely used interconnect standard, USB and in particular, at the emerging technology of Wireless USB and its requirements and promises.

USB Ports

Just about any computer that you buy today comes with one or more Universal Serial Bus connectors on the back. These USB connectors let you attach everything from mice to printers to your computer quickly and easily. The operating system supports USB as well, so the installation of the device drivers is quick and easy, too. Compared to other ways of connecting devices to your computer (including parallel ports, serial ports and special cards that you install inside the computer's case), USB devices are incredibly simple!

Anyone who has been around computers for more than two or three years knows the problem that the Universal Serial Bus is trying to solve -- in the past, connecting devices to computers has been a real headache!
- Printers connected to parallel printer ports, and most computers only came with one. Things like Zip drives, which need a high-speed connection into the computer, would use the parallel port as well, often with limited success and not much speed.
- Modems used the serial port, but so did some printers and a variety of odd things like Palm Pilots and digital cameras. Most computers have at most two serial ports, and they are very slow in most cases.
- Devices that needed faster connections came with their own cards, which had to fit in a card slot inside the computer's case. Unfortunately, the number of card slots is limited and you needed a Ph.D. to install the software for some of the cards.
The goal of USB is to end all of these headaches. The Universal Serial Bus gives you a single, standardized, easy-to-use way to connect up to 127 devices to a computer.
Just about every peripheral made now comes in a USB version. In fact almost all the devices manufactured today are designed to be interfaced to the computer via the USB ports.
USB Connections
Connecting a USB device to a computer is simple -- you find the USB connector on the back of your machine and plug the USB connector into it. If it is a new device, the operating system auto-detects it and asks for the driver disk. If the device has already been installed, the computer activates it and starts talking to it. USB devices can be connected and disconnected at any time.

USB Features
The Universal Serial Bus has the following features:
- The computer acts as the host.
- Up to 127 devices can connect to the host, either directly or by way of USB hubs.
- Individual USB cables can run as long as 5 meters; with hubs, devices can be up to 30 meters (six cables' worth) away from the host.
- With USB 2.0, the bus has a maximum data rate of 480 megabits per second.
- A USB cable has two wires for power (+5 volts and ground) and a twisted pair of wires to carry the data.
- On the power wires, the computer can supply up to 500 milliamps of power at 5 volts.
- Low-power devices (such as mice) can draw their power directly from the bus. High-power devices (such as printers) have their own power supplies and draw minimal power from the bus. Hubs can have their own power supplies to provide power to devices connected to the hub.
- USB devices are hot-swappable, meaning you can plug them into the bus and unplug them any time.
- Many USB devices can be put to sleep by the host computer when the computer enters a power-saving mode.

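As a small illustration of device enumeration from the host side, the sketch below lists the vendor and product IDs of attached devices using the third-party PyUSB library; the library (and its availability on the system) is an assumption, not part of the USB specification itself.

# Minimal sketch: enumerate attached USB devices with PyUSB (assumed installed,
# e.g. via "pip install pyusb"); prints each device's vendor:product ID.
import usb.core

for dev in usb.core.find(find_all=True):
    # idVendor and idProduct identify the device model on the bus.
    print(f"ID {dev.idVendor:04x}:{dev.idProduct:04x}")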
INTRODUCTION

The origins of VoiceXML began in 1995 as an XML-based dialog design language intended to simplify the speech recognition application development process within an AT&T project called Phone Markup Language (PML). As AT&T reorganized, teams at AT&T, Lucent and Motorola continued working on their own PML-like languages.

In 1998, W3C hosted a conference on voice browsers. By this time, AT&T and Lucent had different variants of their original PML, while Motorola had developed VoxML and IBM was developing its own SpeechML. Many other attendees at the conference were also developing similar languages for dialog design; for example, HP's TalkML and PipeBeach's VoiceHTML.

The VoiceXML Forum was then formed by AT&T, IBM, Lucent, and Motorola to pool their efforts. The mission of the VoiceXML Forum was to define a standard dialog design language that developers could use to build conversational applications. They chose XML as the basis for this effort because it was clear to them that this was the direction technology was going.

In 2000, the VoiceXML Forum released VoiceXML 1.0 to the public. Shortly thereafter, VoiceXML 1.0 was submitted to the W3C as the basis for the creation of a new international standard. VoiceXML 2.0 is the result of this work based on input from W3C Member companies, other W3C Working Groups, and the public.


VoiceXML is designed for creating audio dialogs that feature synthesized speech, digitized audio, recognition of spoken and DTMF key input, recording of spoken input, telephony, and mixed initiative conversations. Its major goal is to bring the advantages of Web-based development and content delivery to interactive voice response applications.

Here are two short examples of VoiceXML. The first is the venerable "Hello World":

<?xml version="1.0" encoding="UTF-8"?>
<vxml xmlns="http://www.w3.org/2001/vxml"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/vxml
http://www.w3.org/TR/voicexml20/vxml.xsd"
version="2.0">
<form>
<block>Hello World!</block>
</form>
</vxml>

The top-level element is <vxml>, which is mainly a container for dialogs. There are two types of dialogs: forms and menus. Forms present information and gather input; menus offer choices of what to do next. This example has a single form, which contains a block that synthesizes and presents "Hello World!" to the user. Since the form does not specify a successor dialog, the conversation ends.

Our second example asks the user for a choice of drink and then submits it to a server script:
<?xml version="1.0" encoding="UTF-8"?>
<vxml xmlns="http://www.w3.org/2001/vxml"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/vxml
http://www.w3.org/TR/voicexml20/vxml.xsd"
version="2.0">
<form>
<field name="drink">
<prompt>Would you like coffee, tea, milk, or nothing?</prompt>
<grammar src="drink.grxml" type="application/srgs+xml"/>
</field>
<block>
<submit next="http://www.drink.example.com/drink2.asp"/>
</block>
</form>
</vxml>

A field is an input field. The user must provide a value for the field before proceeding to the next element in the form. A sample interaction is:

C (computer): Would you like coffee, tea, milk, or nothing?
H (human): Orange juice.
C: I did not understand what you said. (a platform-specific default message.)
C: Would you like coffee, tea, milk, or nothing?
H: Tea
C: (continues in document drink2.asp)

INTRODUCTION

Today, the Internet and web technologies have become commonplace and are rapidly decreasing in cost, to such an extent that technologies such as application servers and enterprise portals are fast becoming the products and commodities of tomorrow. Web-enabling any information access and interaction, from enterprise portals and education to e-governance and healthcare, and everything else one can imagine, is fast becoming so pervasive that almost everyone in the world is affected by it or contributing something to it. However, much of it does not create significant value unless the architecture and services over it are suited to the needs of the organizational and institutional framework of the relevant domains. While IT is getting perfected, there is inadequate work in perfecting the large and complex distributed information systems, the associated information sciences and the much needed institutional changes in every organization to effectively benefit from these developments.

The Kerala Education Grid is a project specifically addressed to the Higher Education sector of the state to set in place effective IT infrastructure and methodologies and thereby improve quality and standards of learning imparted in all the colleges.

Under the aegis of its Department of Higher Education, the State Government of Kerala has taken a major initiative in establishing an Education Grid across all colleges, universities and premier institutions of research and development (R&D). The project is called an Education Grid for two important reasons. Firstly, it aims to equip the colleges with the necessary IT infrastructure and network them among themselves and with premier R&D institutions. The second and more important reason is that the online assisted programmes planned to be put in place over the Education Grid enable the knowledge base, and the associated benefits of experience and expertise, to flow from where they are available (the better institutions and organizations) to where they are needed (the teachers and students in the numerous colleges). The issues of how exactly this project helps, and how it is to be made a part of regular college or university functioning, are explained next.

The articulated vision of this project is to provide "quality education to all students irrespective of which college they are studying in, or where it is located". Having set this objective, one needs to probe in some depth the key factors that really ail our college education today. Firstly, formal education is conducted in a mechanical cycle of syllabus, classrooms, lectures, practicals and examinations, with little enthusiastic involvement by teachers or education administrators. Students attend classes and take examinations with the aim of getting marks or a grade and a degree. In this process, the primary aims of education, namely imparting scholarship, learning, a yearning for learning and the capacity for self-learning, hardly get the attention they deserve in the formal education system.

In this context one may quote Alvin Toffler: "The illiterates of tomorrow are not those who cannot read and write, but those who cannot learn, unlearn and relearn". This raises the key question of what exactly the attributes of knowledge, scholarship and learning are that we wish to impart through our educational institutions. The key to India becoming a successful knowledge society lies in the rejuvenation of our formal higher education system. The Education Grid approach appears to be the most practical, cost-effective, and perhaps the most enlightened and realistic way to achieve this.

Piggybacking on virtual campuses, school and college education in Kerala is poised to fashion citizens for the knowledge society of tomorrow. With the completion of the State Information Infrastructure, and with the implementation of projects such as 'Kerala Education Grid' and 'Education Server', schools and colleges can start offering quality e-resources to students, irrespective of the geographic location of students and teachers.

INTRODUCTION

Searching on the Internet today can be compared to dragging a net across the surface of the ocean. While a great deal may be caught in the net, there is still a wealth of information that is deep and therefore missed. The reason is simple: most of the Web's information resides on dynamically generated sites, and standard search engines never find it.

Traditional search engines create their indices by spidering or crawling surface Web pages. To be discovered, the page must be static and linked to other pages. Traditional search engines can not "see" or retrieve content in the deep Web - those pages do not exist until they are created dynamically as the result of a specific search. Because traditional search engine crawlers can not probe beneath the surface, the deep Web has heretofore been hidden.

Deep web is the name given to the technology of surfacing the hidden value that cannot be easily detected by other search engines. The deep web is content that cannot be indexed and searched by conventional search engines. For this reason the deep web is also called the invisible web.

WHAT IS DEEP WEB?

The Deep Web is the content that resides in searchable databases, the results from which can only be discovered by a direct query. Without the directed query, the database does not publish the result. When queried, Deep Web sites post their results as dynamic Web pages in real-time. Though these dynamic pages have a unique URL address that allows them to be retrieved again later, they are not persistent.

The invisible web consists of files, images and web sites that, for a variety of reasons, cannot be indexed by popular search engines. The deep web is qualitatively different from the surface web. Deep web sources store their content in searchable databases that only produce results dynamically in response to a direct request. But a direct query is a "one at a time" laborious way to search. Deep web's search technology automates the process of making dozens of direct queries simultaneously using multiple-thread technology.
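A minimal sketch of that multi-threaded querying idea is shown below in Python: several direct queries are issued in parallel from a thread pool. The source URLs and the "q" parameter are placeholders; real deep-web sources each expose their own search interface.

# Minimal sketch: issue several direct queries to deep-web sources in parallel.
# The URLs and the query parameter name are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlencode
from urllib.request import urlopen

SOURCES = [
    "https://example.org/search",
    "https://example.net/catalog/query",
]

def direct_query(base_url, term):
    url = base_url + "?" + urlencode({"q": term})
    with urlopen(url, timeout=10) as resp:   # one "direct query" per source
        return base_url, resp.read()

def search_deep_sources(term):
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda u: direct_query(u, term), SOURCES))

if __name__ == "__main__":
    for source, body in search_deep_sources("gene sequence"):
        print(source, len(body), "bytes")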

The Deep Web is made up of hundreds of thousands of publicly accessible databases and is approximately 500 times bigger than the surface Web.

IMPORTANCE OF DEEP WEB

- Public information on the deep Web is currently 400 to 550 times larger than the commonly defined World Wide Web.
- The deep Web contains 7,500 terabytes of information, compared to nineteen terabytes of information in the surface Web.
- The deep Web contains nearly 550 billion individual documents, compared to the one billion of the surface Web.
- More than 200,000 deep Web sites presently exist.
- Sixty of the largest deep-Web sites collectively contain about 750 terabytes of information -- sufficient by themselves to exceed the size of the surface Web forty times.
- On average, deep Web sites receive fifty per cent greater monthly traffic than surface sites and are more highly linked to than surface sites; however, the typical (median) deep Web site is not well known to the Internet-searching public.
- The deep Web is the largest growing category of new information on the Internet.
- Deep Web sites tend to be narrower, with deeper content, than conventional surface sites.
- Total quality content of the deep Web is 1,000 to 2,000 times greater than that of the surface Web.

Definition

SUPER COMPUTERS - OVERVIEW
Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985-1990). Cray himself never used the word "supercomputer", a little-remembered fact; he only recognized the word "computer". In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as IBM and HP, who purchased many of the 1980s companies to gain their experience, although Cray Inc. still specializes in building supercomputers.
The Cray-2 was the world's fastest computer from 1985 to 1989.

Supercomputer Challenges & Technologies
- A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
- Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason: hence the cylindrical shape of his famous Cray range of computers.
- Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.

Technologies developed for supercomputers include:
- Vector processing
- Liquid cooling
- Non-Uniform Memory Access (NUMA)
- Striped disks (the first instance of what was later called RAID)
- Parallel filesystems

Platform
SD2000 uses PARAM 10000, which uses up to 4 UltraSPARC-II processors. PARAM systems can be extended to a cluster supercomputer; a clustered system with 1200 processors can deliver a peak performance of up to 1 TFlop/s. Even though the PARAM 10000 system is not ranked within the top 500 supercomputers, it has the potential to gain a high rank. It uses a variation of MPI developed at CDAC. No performance data is available, although one would presume that it will not be very different from that of other UltraSPARC-II based systems using MPI. Because SD2000 is a commercial product, it is impossible to gather detailed data about the algorithms and performance of the product.
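To give a flavour of the MPI programming model used on such clusters (a generic mpi4py example, not CDAC's actual MPI variant), the sketch below distributes a simple summation across processes and combines the partial results with a reduction.

# Minimal sketch of the MPI model used on cluster supercomputers.
# Uses the generic mpi4py bindings (assumed installed); run with, for example,
#   mpiexec -n 4 python sum_reduce.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's id
size = comm.Get_size()      # total number of processes

# Each process sums a disjoint slice of 0..999; rank 0 gathers the total.
local_sum = sum(range(rank, 1000, size))
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print("total =", total)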

Definition
FireWire, originally developed by Apple Computer, Inc., is a cross-platform implementation of the high-speed serial data bus, defined by the IEEE 1394-1995 and IEEE 1394a-2000 [FireWire 400] and IEEE 1394b [FireWire 800] standards, that moves large amounts of data between computers and peripheral devices. It features simplified cabling, hot-swapping and transfer speeds of up to 800 megabits per second. FireWire is a high-speed serial input/output (I/O) technology for connecting peripheral devices to a computer or to each other. It is one of the fastest peripheral standards ever developed and now, at 800 megabits per second (Mbps), it's even faster.

Based on Apple-developed technology, FireWire was adopted in 1995 as an official industry standard (IEEE 1394) for cross-platform peripheral connectivity. By providing a high-bandwidth, easy-to-use I/O technology, FireWire inspired a new generation of consumer electronics devices from many companies, including Canon, Epson, HP, Iomega, JVC, LaCie, Maxtor, Mitsubishi, Matsushita (Panasonic), Pioneer, Samsung and Sony. FireWire has also been a boon to professional users because of the high-speed connectivity it has brought to audio and video production systems.

In 2001, the Academy of Television Arts & Sciences presented Apple with an Emmy award in recognition of the contributions made by FireWire to the television industry. Now FireWire 800, the next generation of FireWire technology, promises to spur the development of more innovative high-performance devices and applications. This technology brief describes the advantages of FireWire 800 and some of the applications for which it is ideally suited.

TOPOLOGY
The 1394 protocol is a peer-to-peer network with a point-to-point signaling environment. Nodes on the bus may have several ports on them. Each of these ports acts as a repeater, retransmitting any packets received by other ports within the node. Figure 1 shows what a typical consumer may have attached to their 1394 bus. Because 1394 is a peer-to-peer protocol, no specific host is required, unlike USB, where the PC acts as host. In Figure 1, the digital camera could easily stream data to both the digital VCR and the DVD-RAM without any assistance from other devices on the bus.
FireWire uses 64-bit fixed addressing, based on the IEEE 1212 standard. There are three parts to each packet of information sent by a device over FireWire:

- A 10-bit bus ID that is used to determine which FireWire bus the data came from
- A 6-bit physical ID that identifies which device on the bus sent the data
- A 48-bit storage area that is capable of addressing 256 terabytes of information for each node!

The bus ID and physical ID together comprise the 16-bit node ID, which allows for 64,000 nodes on a system. Individual FireWire cables can run as long as 4.5 meters. Data can be sent through up to 16 hops for a total maximum distance of 72 meters. Hops occur when devices are daisy-chained together. Look at the example below: the camcorder is connected to the external hard drive connected to Computer A, Computer A is connected to Computer B, which in turn is connected to Computer C. It takes four hops for Computer C to access the camera.
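The sketch below illustrates this addressing layout in Python, packing and unpacking a 64-bit FireWire address from its 10-bit bus ID, 6-bit physical ID and 48-bit offset; the field values used are arbitrary examples.

# Minimal sketch: pack and unpack a 64-bit IEEE 1394 address from its
# 10-bit bus ID, 6-bit physical ID and 48-bit offset fields.
def pack_1394_address(bus_id, phys_id, offset):
    assert 0 <= bus_id < (1 << 10)
    assert 0 <= phys_id < (1 << 6)
    assert 0 <= offset < (1 << 48)
    return (bus_id << 54) | (phys_id << 48) | offset

def unpack_1394_address(addr):
    return (addr >> 54) & 0x3FF, (addr >> 48) & 0x3F, addr & ((1 << 48) - 1)

addr = pack_1394_address(bus_id=1023, phys_id=5, offset=0x1000)
print(hex(addr), unpack_1394_address(addr))   # the top 16 bits form the node ID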
The 1394 protocol supports both asynchronous and isochronous data transfers.

Isochronous transfers: Isochronous transfers are always broadcast in a one-to-one or one-to-many fashion. No error correction or retransmission is available for isochronous transfers. Up to 80% of the available bus bandwidth can be used for isochronous transfers.
Asynchronous transfers: Asynchronous transfers are targeted to a specific node with an explicit address. They are not guaranteed a specific amount of bandwidth on the bus, but they are guaranteed a fair shot at gaining access to the bus when asynchronous transfers are permitted. This allows error-checking and retransmission mechanisms to take place.

Definition
Humans are very good at recognizing faces and complex patterns, and even the passage of time does not affect this capability. It would therefore help if computers could become as robust as humans in face recognition. Machine recognition of human faces from still or video images has attracted a great deal of attention in the psychology, image processing, pattern recognition, neural science, computer security, and computer vision communities. Face recognition is probably one of the most non-intrusive and user-friendly biometric authentication methods currently available; a screensaver equipped with face recognition technology can automatically unlock the screen whenever the authorized user approaches the computer.

Face is an important part of who we are and how people identify us. It is arguably a person's most unique physical characteristic. While humans have had the innate ability to recognize and distinguish different faces for millions of years, computers are just now catching up.
Visionics, a company based in New Jersey, is one of many developers of facial recognition technology. The twist to its particular software, FaceIt, is that it can pick someone's face out of a crowd, extract that face from the rest of the scene and compare it to a database full of stored images. In order for this software to work, it has to know what a basic face looks like. Facial recognition software is designed to pinpoint a face and measure its features. Each face has certain distinguishable landmarks, which make up the different facial features. These landmarks are referred to as nodal points. There are about 80 nodal points on a human face. Here are a few of the nodal points that are measured by the software:

- Distance between eyes
- Width of nose
- Depth of eye sockets
- Cheekbones
- Jaw line
- Chin

These nodal points are measured to create a numerical code, a string of numbers that represents the face in a database. This code is called a faceprint. Only 14 to 22 nodal points are needed for the FaceIt software to complete the recognition process.

Software
Facial recognition software falls into a larger group of technologies known as biometrics. Biometrics uses biological information to verify identity. The basic idea behind biometrics is that our bodies contain unique properties that can be used to distinguish us from others. Besides facial recognition, biometric authentication methods also include:

- Fingerprint scan
- Retina scan
- Voice identification

Facial recognition methods generally involve a series of steps that serve to capture, analyze and compare a face to a database of stored images. The basic processes used by the FaceIt system to capture and compare images are:
1. Detection - When the system is attached to a video surveillance system, the recognition software searches the field of view of a video camera for faces. If there is a face in the view, it is detected within a fraction of a second. A multi-scale algorithm is used to search for faces in low resolution. The system switches to a high-resolution search only after a head-like shape is detected.
2. Alignment - Once a face is detected, the system determines the head's position, size and pose. A face needs to be turned at least 35 degrees toward the camera for the system to register it.
3. Normalization -The image of the head is scaled and rotated so that it can be registered and mapped into an appropriate size and pose. Normalization is performed regardless of the head's location and distance from the camera. Light does not impact the normalization process.
4. Representation - The system translates the facial data into a unique code. This coding process allows for easier comparison of the newly acquired facial data to stored facial data.
5. Matching - The newly acquired facial data is compared to the stored data and (ideally) linked to at least one stored facial representation.
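A toy sketch of the representation and matching steps is given below: nodal-point measurements are collected into a numeric vector (a "faceprint") and a probe is compared with stored vectors by Euclidean distance. The measurements, threshold and database are invented for illustration and bear no relation to FaceIt's proprietary coding.

# Toy sketch: represent faces as vectors of nodal-point measurements and
# match a probe against a small database by Euclidean distance.
# All numbers, the threshold and the database are hypothetical.
import math

NODAL_KEYS = ["eye_distance", "nose_width", "socket_depth",
              "cheekbone_width", "jaw_width", "chin_length"]

def faceprint(measurements):
    """Order the nodal-point measurements into a fixed-length vector."""
    return [measurements[k] for k in NODAL_KEYS]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(probe, database, threshold=5.0):
    """Return the closest stored identity, or None if nothing is close enough."""
    name, stored = min(database.items(), key=lambda item: distance(probe, item[1]))
    return name if distance(probe, stored) <= threshold else None

database = {
    "alice": [63.0, 34.0, 12.0, 118.0, 101.0, 40.0],
    "bob":   [60.0, 37.0, 14.0, 122.0, 108.0, 45.0],
}
probe = faceprint({"eye_distance": 62.5, "nose_width": 34.5, "socket_depth": 12.2,
                   "cheekbone_width": 117.0, "jaw_width": 102.0, "chin_length": 41.0})
print(match(probe, database))   # prints "alice" for these example values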

Definition

The Internet is a network of networks in which various computers connect to each other throughout the world. The connection to other computers is possible with the help of an ISP (Internet Service Provider). Most Internet users depend on dial-up connections to connect to the Internet, which has many disadvantages such as very poor speed and frequent disconnections. To solve this problem, Internet data can be transferred through cable networks wired to the user's computer. The different types of connections used are PSTN connections, ISDN connections and Internet via cable networks. The advantages are high availability, high bandwidth at low cost, high-speed data access, always-on connectivity, etc.

The huge growth in the number of Internet users every year has resulted in traffic congestion on the net, resulting in slower and more expensive Internet access. As cable TV has a strong reach into homes, it is the best medium for providing the Internet to households with faster access at feasible rates.

We are witnessing an unprecedented demand from residential and business customers, especially in the last few years, for access to the Internet, corporate intranets and various online information services. The Internet revolution is sweeping the country with a burgeoning number of the Internet users. As more and more people are being attracted towards the Internet, traffic congestion on the Net is continuously increasing due to limited bandwidths resulting in slower and expensive Internet access.

The number of households getting on the Internet has increased exponentially in the recent past. First-time Internet users are amazed at the Internet's richness of content and personalization, never before offered by any other medium. But this initial awe lasts only until they experience the slow speed of Internet content delivery; hence the popular reference "World Wide Wait" (not World Wide Web). There is a pent-up demand for high-speed (or broadband) Internet access for fast web browsing and more effective telecommuting.

India has a cable penetration of 80 million homes, offering a vast network for leveraging Internet access. Cable TV has a strong reach into homes, and therefore offering the Internet through cable could further the growth of Internet usage in the home.

Cable is an alternative medium for delivering Internet services. In the US, there are already a million homes with cable modems, enabling high-speed Internet access over cable. In India, we are in the initial stages. We are experiencing innumerable local problems in Mumbai, Bangalore and Delhi, along with an acute shortage of international Internet connectivity.

Accessing the Internet over the public switched telephone network (PSTN) still has a lot of problems, such as dropouts. It takes a long time to download or upload large files, and one has to pay both for Internet connectivity and for telephone usage during that period. Since it is technically possible to offer higher bandwidth over cable, home as well as corporate users are likely to prefer it. Many people cannot afford a PC at their premises, and hardware obsolescence is the main problem for the home user, who cannot afford to upgrade his PC every year. Cable TV based ISP solutions offer an economical alternative.

Definition

Criminals have long employed the tactic of masking their true identity, from disguises to aliases to caller-id blocking. It should come as no surprise then, that criminals who conduct their nefarious activities on networks and computers should employ such techniques. IP spoofing is one of the most common forms of on-line camouflage. In IP spoofing, an attacker gains unauthorized access to a computer or a network by making it appear that a malicious message has come from a trusted machine by "spoofing" the IP address of that machine. In the subsequent pages of this report, we will examine the concepts of IP spoofing: why it is possible, how it works, what it is used for and how to defend against it.

Brief History of IP Spoofing

The concept of IP spoofing was initially discussed in academic circles in the 1980s. In the April 1989 article entitled "Security Problems in the TCP/IP Protocol Suite", author S. M. Bellovin of AT&T Bell Labs was among the first to identify IP spoofing as a real risk to computer networks. Bellovin describes how Robert Morris, creator of the now infamous Internet Worm, figured out how TCP created sequence numbers and forged a TCP packet sequence. This TCP packet included the destination address of his "victim", and using an IP spoofing attack Morris was able to obtain root access to his targeted system without a user ID or password. Another infamous attack, Kevin Mitnick's Christmas Day crack of Tsutomu Shimomura's machine, employed the IP spoofing and TCP sequence prediction techniques. While the popularity of such cracks has decreased due to the demise of the services they exploited, spoofing can still be used and needs to be addressed by all security administrators. A common misconception is that "IP spoofing" can be used to hide your IP address while surfing the Internet, chatting on-line, sending e-mail, and so forth. This is generally not true. Forging the source IP address causes the responses to be misdirected, meaning you cannot create a normal network connection. However, IP spoofing is an integral part of many network attacks that do not need to see responses (blind spoofing).

2. TCP/IP PROTOCOL SUITE

IP Spoofing exploits the flaws in TCP/IP protocol suite. In order to completely understand how these attacks can take place, one must examine the structure of the TCP/IP protocol suite. A basic understanding of these headers and network exchanges is crucial to the process.

2.1 Internet Protocol - IP

The Internet Protocol (or IP, as it is generally known) is the network layer of the Internet. IP provides a connectionless service. The job of IP is to route and send a packet to the packet's destination. IP provides no guarantee whatsoever for the packets it tries to deliver. IP packets are usually termed datagrams. The datagrams go through a series of routers before they reach the destination. At each node that the datagram passes through, the node determines the next hop for the datagram and routes it to the next hop. Since the network is dynamic, it is possible that two datagrams from the same source take different paths to make it to the destination. Since the network has variable delays, it is not guaranteed that the datagrams will be received in sequence. IP only tries for a best-effort delivery. It does not take care of lost packets; this is left to the higher layer protocols. There is no state maintained between two datagrams; in other words, IP is connectionless.
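To make the connection to spoofing concrete, the sketch below builds a raw IPv4 header with Python's struct module and shows that the source address is simply a field the sender fills in; nothing in IP itself verifies it. The addresses are documentation examples, and the code only constructs bytes locally, it transmits nothing.

# Minimal sketch: construct an IPv4 header whose source address field is
# chosen arbitrarily (i.e. "spoofed"). IP does not verify this field.
# Addresses are example values; nothing is sent on the network.
import socket
import struct

def ip_checksum(data):
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src, dst, payload_len):
    version_ihl = (4 << 4) | 5                        # IPv4, 20-byte header
    header = struct.pack("!BBHHHBBH4s4s",
                         version_ihl, 0, 20 + payload_len,
                         0x1234, 0,                   # identification, flags/frag
                         64, socket.IPPROTO_ICMP, 0,  # TTL, protocol, checksum=0
                         socket.inet_aton(src),       # source address (unverified)
                         socket.inet_aton(dst))       # destination address
    checksum = ip_checksum(header)
    return header[:10] + struct.pack("!H", checksum) + header[12:]

hdr = build_ipv4_header("198.51.100.7", "192.0.2.1", payload_len=8)
print(hdr.hex())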

INTRODUCTION


The ease of use provided by the classic pen-and-paper "interface" is unsurpassed. Except for writing long text passages, most of us regard it as the most convenient way of expressing, conveying, and storing one's thoughts and ideas. However, pen and paper are hardly supported by current information technology. The integration of paper-based devices and modern computers has been hampered by two problems: the difficulty of recording handwriting without affecting the natural look and feel of pen and paper, and the insufficient recognition rates provided by handwriting recognizers. However, considerable progress has been made over the recent years. While new and highly sophisticated pen-based hardware is announced almost monthly nowadays, handwriting recognition rates are steadily improving and will soon reach a level of common acceptance. These developments pave the way for a better integration of pen and paper into the daily workflow.


The main idea of the concept described in the following sections is to generate a so-called model file for all paper documents expecting handwritten input. A model file describes the structure of a document and provides the context knowledge necessary for handwriting recognition. Moreover, it contains information about how the recognized data should be processed, including its final destination. A unique ID printed on each document specifies the corresponding model file. Writers can access the model file of a document via the Internet, under the number specified on the document. In practice, this means calling the (phone) number of a server dispatching model files, which motivates the name of the concept "Callpaper" - a very transparent concept for the writer. The Callpaper concept nicely supports the simultaneous generation of both paper copies and electronic copies of the same document, without imposing any additional load on a writer. It thus introduces the benefits of information technology into paper-based processes without affecting the traditional workflow.
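A minimal sketch of what such a model file might contain is given below; the field names, document ID format and destination URL are invented for illustration and do not reflect any published Callpaper file format.

# Hypothetical sketch of a Callpaper-style model file: the fields of a paper
# form, the context knowledge (expected vocabulary) that supports handwriting
# recognition, and the final destination of the recognized data.
# All names, IDs and URLs are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class FormField:
    name: str
    region: tuple                                   # (x, y, width, height) on the page
    vocabulary: list = field(default_factory=list)  # context for the recognizer

@dataclass
class ModelFile:
    document_id: str     # the unique ID printed on the paper document
    fields: list
    destination: str     # where the recognized data is finally sent

order_form = ModelFile(
    document_id="4711-0001",
    fields=[
        FormField("customer_name", (20, 40, 300, 20)),
        FormField("quantity", (20, 80, 60, 20),
                  vocabulary=[str(n) for n in range(100)]),
    ],
    destination="https://example.org/orders/submit",
)
print(order_form.document_id, [f.name for f in order_form.fields])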


PEN AND PAPER


What are the advantages and disadvantages of paper when compared to the devices of modern information technology? Though this seems to be an easy question, we must now admit that we have not answered it properly during the recent years, considering the fact that many of us were predicting an ever decreasing paper consumption caused by the technological progress. In fact, the advantages of paper have been grossly underestimated, in particular when it is utilized in combination with pens. Of course, the disadvantages of paper are obvious:


- Limited feedback on user input,

- Limited access and search functions,

- Slow transfer, et cetera.


And yet, paper combines features that no other hardware device can offer.


- Paper has a high resolution and is thus pleasant to read.

- Paper can be easily folded and crushed.

- Paper is very cheap.

- Paper provides us with a strikingly simple way of deleting data, namely disposing of old paper and replacing it with new.

- Paper supports fast input of both textual as well as graphical data.

- Paper does not require tedious eye-hand coordination since the cursor is directly under the pen tip.

INTRODUCTION

Nature is a great place to go for inspiration when you want to see systems that are robust and have been around for millions of years. Nature provides the inspiration for swarm intelligence. Look at the emergent behavior observed in ants, termites, bees and others: we see very simple creatures performing complex behavior as a group.

Consider the case of an ant colony working together.

The behaviour of ants has long fascinated scientists. These insects have the strength to carry food up to seven times their own body weight, and set up amazingly complex colonies, with social 'castes' in which every member has a role.
In fact, ants are not fascinating just to entomologists looking at them under the microscope. In recent years, computer scientists have been paying great attention to the way in which a colony of ants can solve complex problems; in particular, how it finds the shortest route to a food source.

Each insect in a colony seemed to have its own agenda, and yet the group as a whole appeared to be highly organized. This organization was not achieved under supervision, but through interaction among individuals. This was most apparent in the way in which ants travel to and from a food source.

Ants form and maintain a line to their food source by laying a trail of pheromone, i.e. a chemical to which other members of the same species are very sensitive. They deposit a certain amount of pheromone while walking, and each ant prefers to follow a direction rich in pheromone. This enables the ant colony to quickly find the shortest route. The first ants to return should normally be those on the shortest route, so this will be the first to be doubly marked by pheromone (once in each direction). Thus other ants will be more attracted to this route than to longer ones not yet doubly marked, which means it will become even more strongly marked with pheromone.

Thus, the shortest route is doubly marked, and more ants will follow it. This simple model finds the shortest route between the nest and a food source. Allowing the pheromone trail to "evaporate" (as in nature) provides the ants with a mechanism to explore for alternate food sources when the first is depleted, and for alternate routes should the first become blocked. Studying this uncanny skill has enabled researchers to create software agents capable of solving complex IT problems. This forms the basic idea behind SWARM INTELLIGENCE.

CHARACTERISTICS OF SWARM

- Distributed, no central control or data source;
- No (explicit) model of the environment;
- Perception of the environment, i.e. sensing;
- Ability to change the environment.

TRAVELING SALES ANT

In the traveling salesman problem, a person must find the shortest route by which to visit a given number of cities, each exactly once. The classic problem is devilishly difficult: for just 15 cities there are billions of route possibilities.

Recently, researchers have begun to experiment with ant-like agents to derive a solution. The approach relies on the artificial ants laying and following the equivalent of pheromone trails.
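A compact Python sketch of this ant-based approach to the travelling salesman problem is shown below. It is a deliberately simplified ant colony optimization with invented parameter values, intended only to illustrate pheromone laying, evaporation and probabilistic route choice, not to reproduce any published algorithm exactly.

# Simplified ant colony optimization for the travelling salesman problem.
# All parameters (ants, iterations, evaporation rate, etc.) are illustrative.
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_colony_tsp(coords, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5, q=100.0):
    n = len(coords)
    dist = [[math.dist(coords[i], coords[j]) or 1e-9 for j in range(n)] for i in range(n)]
    pher = [[1.0] * n for _ in range(n)]              # pheromone on each edge
    best_tour, best_len = None, float("inf")

    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                # Prefer edges that are short and rich in pheromone.
                weights = [(j, (pher[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                           for j in unvisited]
                r, acc = random.uniform(0, sum(w for _, w in weights)), 0.0
                for j, w in weights:
                    acc += w
                    if acc >= r:
                        break
                tour.append(j)
                unvisited.remove(j)
            tours.append((tour, tour_length(tour, dist)))

        # Evaporate, then let each ant deposit pheromone inversely to tour length.
        pher = [[(1 - rho) * p for p in row] for row in pher]
        for tour, length in tours:
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pher[a][b] += q / length
                pher[b][a] += q / length
    return best_tour, best_len

cities = [(random.random(), random.random()) for _ in range(15)]
tour, length = ant_colony_tsp(cities)
print("best tour length found:", round(length, 3))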

INTRODUCTION

The L18 flash memory device provides read-while-write and read-while-erase capability with density upgrades through 256-Mbit. This family of devices provides high performance at low voltage on a 16-bit data bus. Individually erasable memory blocks are sized for optimum code and data storage. Each device density contains one parameter partition and several main partitions. The flash memory array is grouped into multiple 8-Mbit partitions. By dividing the flash memory into partitions, program or erase operations can take place at the same time as read operations.

Although each partition has write, erase and burst read capabilities, simultaneous operation is limited to write or erase in one partition while other partitions are in read mode. The L18 flash memory device allows burst reads that cross partition boundaries. User application code is responsible for ensuring that burst reads don't cross into a partition that is programming or erasing. Upon initial power up or return from reset, the device defaults to asynchronous page-mode read. Configuring the Read Configuration Register enables synchronous burst-mode reads. In synchronous burst mode, output data is synchronized with a user-supplied clock signal. A WAIT signal provides easy CPU-to-flash memory synchronization. In addition to the enhanced architecture and interface, the L18 flash memory device incorporates technology that enables fast factory program and erase operations. Designed for low-voltage systems, the L18 flash memory device supports read operations with VCC at 1.8 V, and erase and program operations with VPP at 1.8 V or 9.0 V.


MEMORY

In order to enable computers to work faster, there are several types of memory available today. Within a single computer there is more than one type of memory.

TYPES OF RAM

The RAM family includes two important memory devices: static RAM (SRAM) and dynamic RAM (DRAM). The primary difference between them is the lifetime of the data they store. SRAM retains its contents as long as electrical power is applied to the chip. If the power is turned off or lost temporarily, its contents will be lost forever. DRAM, on the other hand, has an extremely short data lifetime, typically about four milliseconds. This is true even when power is applied constantly.

In short, SRAM has all the properties of the memory you think of when you hear the word RAM. Compared to that, DRAM seems useless. However, a simple piece of hardware called a DRAM controller can be used to make DRAM behave more like SRAM. The job of the DRAM controller is to periodically refresh the data stored in the DRAM. By refreshing the data before it expires, the contents of memory can be kept alive for as long as they are needed. So DRAM is also as useful as SRAM.
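The toy simulation below illustrates that refresh duty: a row's contents are treated as lost unless the controller touches it within the retention time. The roughly 4 ms retention figure comes from the text above; the row count and refresh intervals are invented for the example.

# Toy simulation of a DRAM controller's refresh job: a row's data survives
# only if it is refreshed again within the retention time.
# Retention of ~4 ms is from the text; the other numbers are illustrative.
RETENTION_MS = 4.0
N_ROWS = 8

def simulate(refresh_interval_ms, sim_time_ms=64.0):
    last_refresh = [0.0] * N_ROWS        # time each row was last refreshed
    t, row, lost = 0.0, 0, False
    while t < sim_time_ms:
        # The controller refreshes one row per interval, cycling through all rows.
        if t - last_refresh[row] > RETENTION_MS:
            lost = True                  # the row expired before being refreshed
        last_refresh[row] = t
        row = (row + 1) % N_ROWS
        t += refresh_interval_ms
    return "data lost" if lost else "data retained"

print("0.4 ms per row refresh:", simulate(0.4))   # 8 x 0.4 = 3.2 ms < 4 ms
print("1.0 ms per row refresh:", simulate(1.0))   # 8 x 1.0 = 8.0 ms > 4 ms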

When deciding which type of RAM to use, a system designer must consider access time and cost. SRAM devices offer extremely fast access times (approximately four times faster than DRAM) but are much more expensive to produce. Generally, SRAM is used only where access speed is extremely important. A lower cost-per-byte makes DRAM attractive whenever large amounts of RAM are required. Many embedded systems include both types: a small block of SRAM (a few kilobytes) along a critical data path and a much larger block of dynamic random access memory (perhaps even in Megabytes) for everything else.

INTRODUCTION

The use of the Internet has increased, and with it the gap between transmission speed and processing speed, so special processors are used to increase processing speed. This section mainly explains the TRIPOD register structure, which can be used to increase that speed, along with functions such as protocol processing, memory management and scheduling. The TRIPOD architecture has three register files that establish a pipeline to improve protocol processing performance: while the first register file is used for processing the IP header, the other two register files are respectively loading and storing packet headers. Packet processing is used for demanding operations. Independently of such special operations, multiple PEs enable parallel execution of several instructions per packet, and multithreading supports the assignment of one thread per packet to achieve fast context switching. Supporting subsystems implement such functions as protocol processing, memory management, and scheduling.

PROTOCOLS
A protocol is an agreement between the user and the interface. It defines the basic mechanism for transmitting information and for the receiver to detect the presence of any transmission errors. When a transmission error is detected, even if it affects only a single bit, the complete data block must be discarded. This type of scheme is known as best-try transmission or connectionless transmission.

CLOSING THE GAP

Network systems have employed embedded processors to offload protocol processing and computationally expensive operations for more than a decade. In the past few years, however, the computer industry has been developing specialized network processors to close the transmission-processing gap in network systems.

Today, network processors are an important class of embedded processors, used all across the network systems space from personal to local and wide area networks. They accommodate both the Internet's explosive growth and the proliferation of network-centric system architectures in environments ranging from embedded networks for cars and surveillance systems to mobile enterprise and consumer networks.

PROTOCOL PROCESSING

The protocol processing is oblivious to register file management because the register file structure is transparent to the protocol code. The transparency originates in a simple mechanism that changes the working register file.

MEMORY MANAGEMENT
The memory system includes all parts of the computer that store information. It consists of primary and secondary memory. The primary memory can be referenced one byte at a time, has relatively fast access time, and is usually volatile. The secondary memory refers to collection of storage devices. The modern memory managers automatically transfer information back and forth between the primary and secondary memory using virtual memory.
SCHEDULING
CPU scheduling refers to the task of managing CPU sharing among a community of processes. The scheduling policy is selected by each system administrator so that it reflects the way that particular computer will be used. The two types of scheduling are preemptive and non-preemptive scheduling. Preemptive scheduling uses the interval timer and the scheduler to interrupt a running process in order to reallocate the CPU to a higher-priority ready process. A non-preemptive scheduling algorithm allows a process to run to completion once it obtains the processor.
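The sketch below contrasts the two policies on a few hypothetical processes: non-preemptive first-come-first-served, where each process runs to completion, versus preemptive round-robin, where a timer quantum forces the CPU to be reallocated. The burst times and quantum are invented.

# Sketch: non-preemptive FCFS vs. preemptive round-robin scheduling.
# The process burst times (ms) and the time quantum are hypothetical.
from collections import deque

bursts = {"P1": 24, "P2": 3, "P3": 3}

def fcfs(bursts):
    """Non-preemptive: each process runs to completion in arrival order."""
    t, completion = 0, {}
    for name, burst in bursts.items():
        t += burst
        completion[name] = t
    return completion

def round_robin(bursts, quantum=4):
    """Preemptive: the timer interrupt reallocates the CPU every quantum ms."""
    t, completion = 0, {}
    queue = deque(bursts.items())
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            queue.append((name, remaining - run))   # preempted, back of the queue
        else:
            completion[name] = t
    return completion

print("FCFS completion times:       ", fcfs(bursts))
print("Round-robin completion times:", round_robin(bursts))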

INTRODUCTION

Like in most other areas, space research is also moving into large-scale simulations using powerful computers. In fact, given the high cost, and often the impracticability of conducting live experiments, space research has moved into computer-based simulation long before most other streams.

This idea of taking the Internet to space comes from the need for a low-cost, high-reliability inter-planetary network. When countries started sending probes into space, each used a unique set of protocols to communicate with earth. This was done using the Deep Space Network (DSN). Since communication was done with a common ground station, the need for a common protocol increased with time. Taking the Internet to space is an offshoot of this need for standardization. The Inter-Planetary Network (IPN), a part of the Jet Propulsion Laboratory (JPL), is managing this program.

Satellite images of earth have been easily and commercially available for sometime now. These images are archived and distributed in various formats. Satellite images use file formats that can save additional information used for computation.

Each satellite used customized systems for communication. For the sake of interoperability with other systems, a set of protocols was designed, specified and implemented. These protocols are currently being tested and are called the Space Communications Protocol Standards (SCPS). These are the protocols used for space communication.

SHIFTING INTERNET TO SPACE

- Earlier, satellites used customized systems for communication using the DSN.
- With cooperation among nations and agencies, interoperability becomes important.
- NASA, the US Defense Department and the National Security Agency of the US jointly designed, specified, implemented and are testing a set of protocols called the Space Communications Protocol Standards (SCPS).

NEED FOR A STANDARD PROTOCOL

There were problems in integrating the networks with the DSN. The reasons that led to the evolution of a standard protocol are:

1. The probes of each country used a unique set of protocols. Since the probes communicated with the same ground station, the need for a common protocol increased.
2. Cooperation among agencies and countries is increasing, but since each country used different standards, this made the problem worse.
3. Need for a low-cost, high-reliability inter-planetary network.
4. The existing Internet protocols were not sufficient for the Space communication. The error rate and round trip delay were the main factors.

WHY NOT TCP/IP?

When a common standard was required for space communication, the first plan was to use the existing TCP/IP stack protocol. But it was not practical.

A key limitation with TCP in high bit error networks is the lack of error correction capabilities. Since TCP cannot correct bit errors, if even a single bit within a packet is corrupted in transit, the receiver will discard the entire packet. This turns bit errors into packet loss.

In addition, TCP can recover from the loss of only one packet per round trip. If the network's round-trip time is 500 milliseconds, then TCP can tolerate only one packet loss per 500 milliseconds. To illustrate the implication of this limitation, consider what happens if this network has a bit error rate of 10^-5: TCP can send data at a maximum rate of about 200 kbps, no matter how fast the physical network is!
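The ~200 kbps figure can be roughly reproduced with a back-of-the-envelope calculation. The sketch below assumes 1500-byte packets and the 500 ms round trip quoted above (the packet size is my assumption, not part of the original text): a bit error rate of 10^-5 corrupts about one packet in nine, and if TCP can absorb only one loss per round trip, that caps the sending rate.

    # Back-of-the-envelope check of the ~200 kbps figure quoted above.
    # Assumptions (illustrative): 1500-byte packets, BER of 1e-5, RTT of 0.5 s.
    packet_bits = 1500 * 8
    ber = 1e-5
    rtt = 0.5                                    # seconds

    # Probability that at least one bit of a packet is corrupted in transit:
    p_loss = 1 - (1 - ber) ** packet_bits        # about 0.11

    # If TCP tolerates only one lost packet per round trip, it can push at most
    # roughly 1/p_loss packets per RTT before a loss stalls the connection:
    max_rate_bps = packet_bits / (p_loss * rtt)
    print("packet loss probability: %.2f" % p_loss)
    print("max sustainable rate: %.0f kbps" % (max_rate_bps / 1000))   # ~210 kbps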

Another limitation of TCP is the weakness of its data corruption protection. TCP uses a relatively weak checksum scheme to detect bit errors in each packet. At high bit error rates this approach fails relatively often, allowing corrupted data to be delivered to the application undetected. It has been calculated that window-based TCP is unsuitable for deep-space conditions: with a round-trip time of 40 minutes, throughput drops to roughly 20 B/s even on a 1 Mb/s link.
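To see why the checksum is weak, recall that the Internet checksum is just a 16-bit one's-complement sum of the packet's 16-bit words, so any corruption that leaves the sum unchanged goes unnoticed. The small sketch below (standard RFC 1071-style checksum over made-up data) shows one such case: two 16-bit words swapped in transit produce the identical checksum.

    import struct

    def internet_checksum(data: bytes) -> int:
        """Standard 16-bit one's-complement Internet checksum (RFC 1071 style)."""
        if len(data) % 2:
            data += b"\x00"
        total = 0
        for (word,) in struct.iter_unpack("!H", data):
            total += word
            total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
        return ~total & 0xFFFF

    original = b"\x12\x34\x56\x78"
    corrupted = b"\x56\x78\x12\x34"        # two 16-bit words swapped in transit
    print(hex(internet_checksum(original)), hex(internet_checksum(corrupted)))
    # Both checksums are identical, so this corruption goes undetected.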

INTRODUCTION

The ability to rapidly create, deploy, and manage new network services in response to user demands presents a significant challenge to the research community and is a key factor driving the development of programmable networks. Existing network architectures such as the Internet, mobile, telephone, and asynchronous transfer mode (ATM) exhibit two key limitations that prevent us from meeting this challenge:

" Lack of intrinsic architectural flexibility in adapting to new user needs and requirements
" Lack of automation of the process of realization and deployment of new and distinct network architectures

In what follows we make a number of observations about the limitations encountered when designing and deploying network architectures. First, current network architectures are deployed on top of a multitude of networking technologies such as land-based, wireless, mobile, and satellite for a bewildering array of voice, video, and data applications. Since these architectures offer a very limited capability to match the many environments and applications, the deployment of these architectures has predictably met with various degrees of success.

Tremendous difficulties arise, for example, because of the inability of TCP to match the high loss rate encountered in wireless networks or for mobile IP to provide fast handoff capabilities with low loss rates to mobile devices. Protocols other than mobile IP and TCP operating in wireless access networks might help, but their implementation is difficult to realize. Second, the interface between the network and the service architecture responsible for basic communication services (e.g., connection setup procedures in ATM and telephone networks) is rigidly defined and cannot be replaced, modified, or supplemented. In other cases, such as the Internet, end user connectivity abstractions provide little support for quality of service (QoS) guarantees and accounting for usage of network resources (billing). Third, the creation and deployment of network architecture is a manual, time-consuming, and costly process.

In response to these limitations, we argue that there is a need to propose, investigate, and evaluate alternative network architectures to the existing ones (e.g., IP, ATM, mobile). This challenge goes beyond the proposal for yet another experimental network architecture. Rather, it calls for new approaches to the way we design, develop, deploy, observe, and analyze new network architectures in response to future needs and requirements. We believe that the design, deployment, architecting, and management of new network architectures should be automated and built on a foundation of spawning networks, a new class of open programmable networks.

We describe the process of automating the creation and deployment of new network architectures as spawning. The term spawning finds a parallel with an operating system spawning a child process. By spawning a process the operating system creates a copy of the calling process. The calling process is known as the parent process and the new process as the child process. Notably, the child process inherits its parent's attributes, typically executing on the same hardware (i.e., the same processor). We envision spawning networks as having the capability to spawn not processes but complex network architectures. Spawning networks support the deployment of programmable virtual networks.

We call a virtual network installed on top of a set of network resources a parent virtual network. We propose the realization of parent virtual networks with the capability of creating child virtual networks operating on a subset of network resources and topology, as illustrated in Fig. 1. For example, part of an access network to a wired network might be redeployed as a picocellular virtual network that supports fast handoff (e.g., by spawning a Cellular IP virtual network), as illustrated in Fig. 1. In this case the access network is the parent and the Cellular IP network the child. We describe a framework for spawning networks based on the design of the Genesis Kernel, a virtual network operating system capable of automating the virtual network life-cycle process; that is, profiling, spawning, architecting, and managing programmable network architectures on demand.
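The parent/child relationship can be pictured with a short conceptual sketch. The Python below is not the Genesis Kernel API; the class, node and link names are purely illustrative, and the only rule it enforces is the one described above: a child virtual network runs a (possibly different) architecture on a subset of its parent's resources.

    class VirtualNetwork:
        """Conceptual model of a parent virtual network that can spawn children
        on a subset of its nodes and links (names are illustrative only)."""

        def __init__(self, name, nodes, links, architecture):
            self.name = name
            self.nodes = set(nodes)
            self.links = set(links)
            self.architecture = architecture     # e.g. "IP", "Cellular IP"
            self.children = []

        def spawn(self, name, nodes, links, architecture):
            # The child inherits only resources the parent actually owns.
            nodes, links = set(nodes), set(links)
            assert nodes <= self.nodes and links <= self.links, "not a subset of parent"
            child = VirtualNetwork(name, nodes, links, architecture)
            self.children.append(child)
            return child

    # A wired access network (parent) spawns a picocellular child that runs a
    # fast-handoff architecture on part of its topology.
    access = VirtualNetwork("access-net", {"r1", "r2", "bs1", "bs2"},
                            {("r1", "r2"), ("r2", "bs1"), ("r2", "bs2")}, "IP")
    pico = access.spawn("pico-net", {"r2", "bs1", "bs2"},
                        {("r2", "bs1"), ("r2", "bs2")}, "Cellular IP")
    print(pico.name, pico.architecture, sorted(pico.nodes))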

INTRODUCTION

Organizations are seeking to extend their enterprises and provide knowledge workers with ever-greater mobility and access to information and applications. Powerful new computing and communications devices along with wireless networks are helping provide that mobility. This has sparked the creation of "smart clients," or applications and devices that can take advantage of the power of local processing but have the flexibility of Web-based computing.

Smart clients are computers, devices, or applications that can provide:
1. The best aspects of traditional desktop applications, including highly responsive software, sophisticated features for users, and great opportunities for developers to enhance existing applications or create new ones.
2. The best aspects of "thin clients," including a small form factor, economical use of computing resources such as processors and memory, ease of deployment, and easy manageability.
3. A natural, easily understood user interface (UI) that is high quality and designed for occasional connectivity with other systems.
4. Interoperability with many different types of devices.
5. The ability to consume Web services.
Organizations can start building and using smart client applications today with a rich array of Microsoft products and tools that eliminate barriers to developing and deploying smart clients. These tools include:
1. .NET Framework
2. Compact Framework.
3. Visual Studio .NET
4. Windows client operating systems
5. Windows Server 2003

RICH CLIENTS, THIN CLIENTS AND SMART CLIENTS

Rich Clients
Rich clients are the usual programs running on a PC locally. They take advantage of the local hardware resources and the features of the client operating system platform. They have the following advantages:
1. Use of local resources
2. Rich user interface
3. Offline capability
4. High productivity
5. Responsive and flexible operation

Despite the impressive functionality of many of these applications, they have limitations. Many of these applications are stand-alone and operate on the client computer, with little or no awareness of the environment in which they operate. This environment includes the other computers and any services on the network, as well as any other applications on the user's computer. Very often, integration between applications is limited to using the cut or copy and paste features provided by Windows to transfer small amounts of data between applications. They have the following limitations:
1. Tough to deploy and update: Because deployment does not take place over a network, the applications have to be installed separately on each system using removable storage media.
2. "DLL Hell" (application fragility): When a new application is installed, it may replace a shared DLL with a newer version that is incompatible with an existing application, thereby breaking it.

Thin Clients
The Internet provides an alternative to the traditional rich client model that solves many of the problems associated with application deployment and maintenance. Thin client, browser-based applications are deployed and updated on a central Web server; therefore, they remove the need to explicitly deploy and manage any part of the application to the client computer.

Thin clients have the following advantages:
1. Easy to deploy and update: The application can be downloaded over the internet if the URL is provided. Updating can also be done at regular intervals over the internet.
2. Easy to manage: All the data is managed on a single server, with thin clients accessing the data over the internet - providing ease of data management and administration.

Despite the distributed functionality provided by thin clients, they also have some disadvantages. These are:
1. Network dependency: The browser must have a network connection at all times. This means that mobile users have no access to applications if they are disconnected, so they must reenter data when they return to the office.
2. Poor user experience: Common application features such as drag-and-drop, undo-redo, and context-sensitive help may be unavailable, which can reduce the usability of the application.

Because the vast majority of the application logic and state lives on the server, thin clients make frequent requests back to the server for data and processing. The browser must wait for a response before the user can continue to use the application; therefore, the application will typically be much less responsive than an equivalent rich client application. This problem is exacerbated in low bandwidth or high latency conditions, and the resulting performance problems can lead to a significant reduction in application usability and user efficiency.

INTRODUCTION

Satellites are ideal for providing Internet and private network access over long distances and to remote locations. However, the Internet protocols are not optimized for satellite conditions, and consequently the throughput over satellite networks is restricted to only a fraction of the available bandwidth. We can overcome these restrictions by using the Sky X protocol.

The Sky X Gateway and Sky X Client/Server systems replace TCP over the satellite link with a protocol optimized for the long latency, high loss and asymmetric bandwidth conditions of typical satellite communication. Adding the Sky X system to a satellite network allows users to take full advantage of the available bandwidth. The Sky X Gateway transparently enhances the performance of all users on a satellite network without any modifications to the end clients and servers. The Sky X Client and the Sky X Server enhance the performance of data transmissions over satellites directly to end-user PCs, thereby increasing Web performance by three times or more and file transfer speeds by 10 to 100 times. The Sky X solution is entirely transparent to end users, works with all TCP applications and does not require any modifications to end clients and servers.

Sky X products are the leading implementation of a class of products known variously as protocol gateways, TCP Performance Enhancing Proxies (TCP PEP), or satellite spoofers. The Sky X gateways are available as ready-to-install hardware solutions that can be added to any satellite network.


The Sky X family consists of the Sky X Gateway, Sky X Client/Server and Sky X OEM products. The Sky X Gateway is a hardware solution designed for easy installation into any satellite network and provides performance enhancement for all devices on the network. The Sky X Client/Server provides performance enhancement to individual PCs.


PERFORMANCE OF TCP OVER SATELLITE

Satellites are an attractive option for carrying Internet and other IP traffic to many locations across the globe where terrestrial options are limited or prohibitively priced. However, data networking over satellites must overcome the latency and high bit error rates typical of satellite communications, as well as the asymmetric bandwidth of most satellite networks.

Communication over geosynchronous satellites, orbiting at an altitude of 22,300 miles, has round-trip times of approximately 540 ms, an order of magnitude larger than terrestrial networks. The journey through the atmosphere can also introduce bit errors into the data stream. These factors, combined with back-channel bandwidth typically much smaller than that available on the forward channel, reduce the effectiveness of TCP, which is optimized for short hops over low-loss cables or fiber. TCP is very effective in local networks connected by cable or optical fiber, offering features such as IPv6, IPsec and other leading-edge functionality, and it also works with real-time operating systems. TCP is designed for efficiency and high performance, optimized for maximum throughput and the highest transaction speeds in local networks.

But satellite conditions adversely interact with a number of elements of the TCP architecture, including its window sizing, congestion avoidance algorithms, and data acknowledgment mechanisms, which together severely constrict the data throughput that can be achieved over satellite links. Thus the advantages TCP offers in LANs are no longer effective over a satellite link, so it is desirable to design a separate protocol for communication through the satellite that eliminates the disadvantages of using TCP over the satellite link.
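The window-sizing problem in particular is easy to quantify. The sketch below assumes a classic 64 KB TCP window (no window scaling) and the 540 ms geosynchronous round trip quoted above, plus a hypothetical 45 Mbps transponder for the second calculation; both figures are illustrative assumptions rather than Sky X specifications.

    # Why TCP window sizing hurts on a GEO satellite link.
    # Assumptions: classic 64 KB receive window (no window scaling), 540 ms RTT.
    window_bytes = 64 * 1024
    rtt = 0.54                                   # seconds, geosynchronous round trip

    max_throughput_bps = window_bytes * 8 / rtt  # at most one full window per round trip
    print("throughput ceiling: %.0f kbps" % (max_throughput_bps / 1000))   # ~971 kbps

    # Conversely, the window needed to keep a 45 Mbps transponder full:
    link_bps = 45e6
    needed_window = link_bps * rtt / 8
    print("window needed to fill the link: %.0f KB" % (needed_window / 1024))  # ~2966 KB

In other words, no matter how fast the satellite channel is, an unmodified TCP connection with a default window cannot exceed roughly 1 Mbps over a GEO link, which is why a protocol tuned for long latency is attractive.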

INTRODUCTION

Reconfigurable computing technology is the ability to modify a computer system's hardware architecture in real time. Although originally proposed in the late 1960s by a researcher at UCLA, reconfigurable computing is a relatively new field of study. The decades-long delay had mostly to do with a lack of acceptable reconfigurable hardware. Interest in this field was first triggered off late in 2002, when a small Silicon Valley start-up called QuickSilver Technologies announced what it called the Adaptive Computing Machine (ACM), a new class of digital integrated circuit that can be embedded directly into a mobile device and enables hardware to be programmed almost as if it were a piece of software. For example, take three common applications that the average mobile phone performs seamlessly: search for a local cell, verify whether the number represents an authorized user, then make the connection. Today these three operations are performed by three different chips inside the handset. With the new adaptive technology, a single chip can be reconfigured by a software instruction to assume different hardware functions and perform all three applications one after another.

The earliest reconfigurable computing systems predate even digital computers. Before digital logic, scientific and engineering computations were done on programmable analog computers: big banks of op-amps, comparators, multipliers and passive components interconnected via a plugboard and patch cords. By connecting components together, a clever user could implement a network whose nodes obeyed a chosen set of differential equations, in effect a differential-equation solver capable of deployment-time reconfigurability. Toward the end of its era, the analog computer was combined with relay banks, and later with digital computers, to form hybrids. These machines could reconfigure themselves between execution sequences, providing an early form of yet another category of configurability. Some hybrid-computer programmers became experts at juggling configurations while holding data in sample-and-hold circuits to extend the range of these systems.

The first moves toward really fluid reconfigurability came with the advent of embeddable digital computers. With the characteristics of a system defined by software in RAM, nothing could be simpler. Changing the operation of the system at installation, in response to changing data, or even on the fly is a matter of loading a different application. Variants on this theme included tightly coupled networks of computers in which the network topology could adapt to changing data flows, and even computers that could change their instruction sets in response to changing application demands.

But the first explorations into what most people today mean by the term reconfigurable computing came after the development of large SRAM-based FPGAs. These devices provided a fabric of logic cells and interconnects that could be altered, albeit with some difficulty, to create just about any logic netlist that would fit into the chip. Researchers quickly seized upon the parts and began experimenting with deployment-time reconfiguration, creating a hardwired digital network designed for a specific algorithm.

Experiments with reconfigurability in FPGAs identified two promising advantages: reduction of the size or power consumption of the hardware, and increases in performance. Often the two advantages came together rather than separately. The advantages, it turned out, came from only a few quite specific techniques. One of these was simple: reuse of hardware. If you organize a system in such a way that it has several distinct, non-overlapping operating modes, then you can save hardware by configuring a programmable fabric to execute in one mode, stopping it, then configuring it to operate in another mode.

A number of companies are currently working in this area. Most of the big players in the conventional DSP/ASIC area (Texas Instruments, IBM, Motorola, Intel) are known to be working overtime to come up with reconfigurable designs of their own.

RECONFIGURABLE COMPUTING SYSTEMS

Current computers are fixed hardware systems based upon microprocessors. As powerful as the microprocessor is, it must handle far more functions than just the application at hand. With each new generation of microprocessors, application performance increases only incrementally. In many cases the application must be rewritten to achieve this incremental performance enhancement. Traditional fixed hardware may be classified into three categories: logic (gate arrays, PALs, etc.), embedded control (controllers, e.g. ASICs and custom VLSI devices) and computers (microprocessors, e.g. x86, 68000, PowerPC).

Reconfigurable computing systems are computing platforms whose architecture can be modified by software to suit the application at hand. To get the maximum throughput, an algorithm must be placed in hardware (e.g. an ASIC or DSP); dramatic performance gains are obtained through the 'hardwiring' of the algorithm. In a reconfigurable computing system, the hardwiring takes place on a function-by-function basis as the application executes.
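The hardware-reuse idea from the previous section can be illustrated with a purely conceptual sketch. The Python below is not how an FPGA or the ACM is actually programmed; a callable simply stands in for a bitstream, and the three phone functions mirror the cell-search / authentication / connection example given earlier.

    class ReconfigurableFabric:
        """Toy model of a reconfigurable fabric: the same resource is reused by
        loading a different configuration ('bitstream') for each operating mode."""

        def __init__(self):
            self._configuration = None

        def load(self, name, function):
            print("reconfiguring fabric ->", name)
            self._configuration = function       # stands in for loading a bitstream

        def execute(self, *args):
            return self._configuration(*args)

    # Three phone functions that never run at the same time share one fabric.
    fabric = ReconfigurableFabric()

    fabric.load("cell-search", lambda cells: min(cells, key=lambda c: c[1]))
    print(fabric.execute([("cell-A", 42), ("cell-B", 17)]))      # nearest cell

    fabric.load("authenticate", lambda number, db: number in db)
    print(fabric.execute("555-0100", {"555-0100", "555-0199"}))  # True

    fabric.load("connect", lambda number: "connected to " + number)
    print(fabric.execute("555-0100"))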

INTRODUCTION

Today's personal computing environment is built on flexible, extensible, and feature-rich platforms that enable consumers to take advantage of a wide variety of devices, applications, and services. Unfortunately, the evolution of shared networks and the Internet has made computers more susceptible to attacks at the hardware, software, and operating system levels. Increasing existing security measures, such as adding more firewalls and creating password protection schemes, can slow data delivery and frustrate users. Using only software-based security measures to protect existing computers is starting to reach the point of diminishing returns.

These new problems have created the need for a trustworthy computing platform. Users want computers that provide both ease-of-use and protection from malicious programs that can damage their computers or access their personal information. Because they use their computers to process and store more and more valuable and important data, users need a platform that addresses their data security, personal privacy, and system integrity needs.

The next-generation secure computing base (NGSCB) is a combination of new hardware and operating system features that provides a solid foundation on which privacy- and security-sensitive software can be built. NGSCB does not affect the software running in the main operating system; rather, NGSCB-capable computers provide an isolated execution environment. With NGSCB-capable computers, users can choose to work within the standard operating system environment using their existing applications, services, and devices without any changes, or they can choose to run critical processes by using NGSCB-trusted components that exist in a separate, protected operating environment.

NGSCB FUNDAMENTALS

On commercial computer platforms, it is not feasible to restrict the firmware, device hardware, drivers, and applications sufficiently to provide adequate process isolation. NGSCB avoids this conflict by allowing both secure and mainstream operating systems to coexist on the same computer.

Only an NGSCB-trusted application, also called a nexus computing agent (NCA), can run securely within the protected operating environment. The user defines specific policies that determine which trusted applications can run in the protected operating environment. The program code does not need to be signed in order to run on an NGSCB-capable computer.

The following core elements provide the protected operating environment for trusted applications:

Strong process isolation

The protected operating environment isolates a secure area of memory that is used to process data with higher security requirements.

Sealed storage

This storage mechanism uses encryption to help ensure the privacy of NGSCB data that persists on the hard disk of NGSCB-capable computers.


Attestation

This occurs when a piece of code digitally signs and attests to a piece of data, helping to confirm to the recipient that the data was constructed by a cryptographically identifiable software stack.
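To give a feel for the shape of attestation, here is a deliberately simplified Python sketch. It is not the NGSCB mechanism: a real platform uses hardware-protected asymmetric keys rather than a shared secret, and the platform key, stack contents and message below are all invented for illustration. The sketch only shows the general pattern of binding a measurement of the software stack to a signed piece of data.

    import hashlib, hmac

    # Illustrative only: a real platform would use hardware-protected asymmetric
    # keys; an HMAC keyed by a platform secret merely shows the shape of attestation.
    PLATFORM_KEY = b"hardware-protected secret"          # assumed, not a real key

    def measure(software_stack):
        """Hash the loaded software stack so it is cryptographically identifiable."""
        return hashlib.sha256(b"|".join(software_stack)).hexdigest()

    def attest(software_stack, data):
        """Bind (measurement, data) with a keyed tag so a recipient can check
        which software stack produced the data."""
        measurement = measure(software_stack)
        tag = hmac.new(PLATFORM_KEY, measurement.encode() + data, hashlib.sha256)
        return measurement, tag.hexdigest()

    stack = [b"nexus", b"trusted-banking-agent"]
    measurement, signature = attest(stack, b"transfer $100 to savings")
    print(measurement[:16], signature[:16])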

Secure paths to the user

By encrypting input and output, the system creates a secure path from the keyboard and mouse to trusted applications and from those applications to a region of the computer screen. These secure paths ensure that valuable information remains private and unaltered.

Microsoft is initially designing NGSCB features and services for the next 32-bit version of the Windows operating system, and plans are underway to support other platforms as well.

Strong Process Isolation

In NGSCB, the protected operating environment provides a restricted and protected address space for applications and services that have higher security requirements. The primary feature of the protected operating environment is curtained memory, a secure area of memory within an otherwise open operating system.

INTRODUCTION

Microsoft's next-generation secure computing base aims to provide robust access control while retaining the openness of personal computers. Unlike closed systems, an NGSCB platform can run any software, but it provides mechanisms that allow operating systems and applications to protect themselves against other software running on the same machine. For example, it can make home finance data inaccessible to programs that the user has not specifically authorized.

To enable this mode of operation, NGSCB platforms implement
" Isolation among operating systems and among processes. OS isolation is related to virtual machine monitors. However, some key NGSCB innovations make it more robust than traditional VMMs by enabling a small machine monitor to isolate itself and other high-assurance components from the basic input/output system (BIOS), device drivers, and bus master devices.
" Hardware and software security primitives that allow software modules to keep secrets and authenticate themselves to local and remote entities. These primitives maintain the trustworthiness of OS access protections without preventing the platform from booting other operating systems.

We refer to a security regimen that allows any software to run but requires it to be identified in access-control decisions as authenticated operation, and we call a hardware-software platform that supports authenticated operation a trusted open system.

A variety of commercial requirements and security goals guided the NGSCB system design. The main commercial requirement was for an open architecture that allows arbitrary hardware peripherals to be added to the platform and arbitrary software to execute without involving a central authority. Furthermore, the system had to operate in the legacy environment of personal computers. While we introduced changes to core platform components, most of the PC architecture remained unmodified. The system had to be compatible with the majority of existing peripherals. Finally, the hardware changes had to be such that they would not have a significant impact.

INTRODUCTION

Currently, most users think of computers as associated with their desktop appliances or with a server located in a dungeon in some mysterious basement. However, many of those same users may be considered to be nomads, in that they own computers and communication devices that they carry about with them in their travels as they move between office, home, airplane, hotel, automobile, branch office, and so on. Moreover, even without portable computers or communications, there are many who travel to numerous locations in their business and personal lives and who require access to computers and communications when they arrive at their destinations.

Indeed, even a move from a desk to a conference table in the same office constitutes a nomadic move since the computing platforms and communications capability may be considerably different at the two locations. The variety of portable computers is impressive, ranging from laptop computers, notebook computers, and personal digital assistants (or personal information managers) to "smart" credit card devices and wristwatch computers. In addition, the communication capability of these portable computers is advancing at a dramatic pace from high-speed modems to PCMCIA modems, e-mail receivers on a card, spread-spectrum hand-held radios, CDPD transceivers, portable GPS receivers, and gigabit satellite access, and so on.

The combination of portable computing with portable communications is changing the way we think about information processing. We now recognize that access to computing and communications is necessary not only from "home base" but also while in transit and after reaching a destination.

These ideas form the essence of a major shift to nomadicity (nomadic computing and communications), which we address in this paper. The focus is on the system support needed to provide a rich set of capabilities and services, in a transparent and convenient form, to the nomad moving from place to place.

In this paper we propose location-independent naming as a mechanism to support nomadic computing on the Internet. Nomadic computing is a limited, but common, form of mobile computing. Nomadic users compute from different locations, but do not require network connectivity while they are moving. For example, a salesman will travel from customer to customer, using his laptop at each location but storing it while driving to the next location.

While Mobile IP was designed for continuously moving computers, it can also support nomadic users. Mobile IP allows users to maintain existing connections while traveling between networks by allowing machines to carry their IP address with them when they move to a new network. Unfortunately, preserving IP addresses across networks introduces several drawbacks, including the performance penalties of triangle routing, the security problems of IP tunneling through firewalls, and the loss of connectivity due to packet loss from source address filters. These drawbacks are inherent to Mobile IP since it breaks the one-to-one mapping between IP address and network location that the Internet uses to route packets to the correct destination.

Our approach, Location Independent Naming (LIN), allows a machine to keep the same name as it moves around the Internet by rebinding its name to its local address when it moves. Once this binding has been made, the machine can communicate with any other host on the Internet using standard IP routing, since the source address on its packets identifies the machine's actual location. Further, other machines on the Internet can communicate with this machine, since its hostname maps to its current address. While this machine remains at its current location, it behaves just like any other machine at that location: no special support is needed except to set up and tear down the name-to-address binding. Because LIN preserves the association between IP address and location, it avoids the performance and security drawbacks of Mobile IP for nomadic computers, as well as the complexities of optimizing Mobile IP.
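The rebind-on-move idea is simple enough to show in a few lines. The Python below is only a conceptual stand-in for LIN: a dictionary plays the role of DNS with dynamic update, the hostname and addresses are invented, and obtaining the local address (done via DHCP in the real scheme) is assumed to have already happened.

    class TinyNameService:
        """Conceptual stand-in for DNS with dynamic update: one name, rebound to
        whatever address the nomadic host obtains at its current location."""

        def __init__(self):
            self._records = {}

        def rebind(self, hostname, address):     # performed each time the host moves
            self._records[hostname] = address

        def resolve(self, hostname):
            return self._records[hostname]

    dns = TinyNameService()

    # The laptop arrives at the office network, obtains a local address, and
    # rebinds its well-known name to that address.
    dns.rebind("laptop.example.org", "192.0.2.17")
    print(dns.resolve("laptop.example.org"))     # correspondents reach 192.0.2.17

    # Later it moves to a customer site, gets a new local address, and rebinds.
    dns.rebind("laptop.example.org", "198.51.100.42")
    print(dns.resolve("laptop.example.org"))     # same name, new location

Between moves, packets to and from the laptop use ordinary IP routing because the name always maps to a topologically correct address.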

LIN represents an advance in the state of the art: it allows correspondent hosts to communicate with a nomadic host using its well-known name without the performance penalties, security issues, or infrastructure support required by Mobile IP. It leverages and extends existing and proposed functionality of DNS (Domain Name System) and DHCP (Dynamic Host Configuration Protocol) to allow a mobile host to keep its name while moving across security domains using existing trust relationships.

INTRODUCTION

Both large and small Internet Service Providers (ISPs) constantly face the challenge of adapting their networks to support rapid growth and customer demand for more reliable and differentiated services. Moreover, many carriers found it cost-effective to multiplex Internet traffic as one of many services carried over an ATM core. Recently, the growth of Internet services and Wavelength Division Multiplexing (WDM) technology at the fiber level have provided a viable alternative to ATM for multiplexing multiple services over individual circuits. In addition, the once faster and higher-bandwidth ATM switches are being outperformed by Internet backbone routers. Equally important, Multiprotocol Label Switching (MPLS) offers simpler mechanisms for packet-oriented traffic engineering and multiservice functionality with the added benefit of greater scalability. MPLS emerged from the IETF's effort to standardize a number of proprietary multilayer switching solutions that were initially proposed in the mid-1990s. To help you appreciate the importance of MPLS and its impact on the Internet core, the first half of this paper describes the forces that motivated the development and evolution of these different solutions, focusing on the common features and design considerations they share: the complete separation of the control component from the forwarding component and the use of a label-swapping forwarding paradigm.

Perspective

Over the past few years, a number of new technologies have been designed to support Internet Service Providers (ISPs) as they try to keep a step ahead of the Internet's explosive growth. The latest technological advances include Internet backbone routers, new queuing and scheduling algorithms, IPSEC, web-caching services, directory services, and integrated routing/forwarding solutions. While all these technologies are critical for the successful operation and continued growth of the Internet, the evolution of routing functionality is essential if ISPs want to provide support for a new class of revenue-generating customer services.

Multiprotocol Label Switching (MPLS) is the latest step in the evolution of routing/forwarding technology for the core of the Internet. MPLS delivers a solution that seamlessly integrates the control of IP routing with the simplicity of Layer 2 switching. Furthermore, MPLS provides a foundation that supports the deployment of advanced routing services because it solves a number of complex problems:

" MPLS addresses the scalability issues associated with the currently deployed IP-over-ATM overlay model
" MPLS significantly reduces the complexity of network operation.
" MPLS facilitates the delivery of new routing capabilities that enhance conventional IP routing techniques.
" MPLS offers a standards-based solution that promotes multivendor interoperability.

MPLS emerged from the IETF's effort to standardize a set of proprietary multilayer switching solutions that were originally developed in the mid-1990s. To fully understand the essence of MPLS and its role in the Internet, it is valuable to look back and examine the forces that stimulated the development of these proprietary multilayer switching approaches and how they were ultimately integrated into MPLS.

Terminology

Forwarding Equivalence Class (FEC): A group of IP packets which are forwarded in the same manner (e.g. over the same path, with the same forwarding treatment).
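The FEC definition and the label-swapping paradigm mentioned above can be tied together with a small sketch. The Python below is not a router implementation; the prefixes, labels and LSR names are invented. An ingress router classifies a packet into an FEC once and pushes a label, and each core LSR then forwards by an exact-match lookup on the incoming label, swapping it for an outgoing label.

    # Ingress: classify a packet into a Forwarding Equivalence Class and label it.
    FEC_TABLE = {                                # destination prefix -> initial label
        "10.1.0.0/16": 100,
        "10.2.0.0/16": 200,
    }

    # Core LSRs: exact-match lookup on the incoming label, swap, and forward.
    LSR_TABLES = {
        "LSR-A": {100: (300, "LSR-B"), 200: (400, "LSR-C")},
        "LSR-B": {300: (301, "egress-1")},
        "LSR-C": {400: (401, "egress-2")},
    }

    def classify(dest_prefix):
        """All packets in the same FEC get the same label and forwarding treatment."""
        return FEC_TABLE[dest_prefix]

    def forward(label, lsr="LSR-A"):
        path = [lsr]
        while lsr in LSR_TABLES and label in LSR_TABLES[lsr]:
            label, lsr = LSR_TABLES[lsr][label]
            path.append(lsr)
        return path

    print(forward(classify("10.1.0.0/16")))      # ['LSR-A', 'LSR-B', 'egress-1']

The key point is that the expensive longest-prefix classification happens only once at the ingress; every subsequent hop does a simple label lookup, which is what separates the control component from the forwarding component.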

INTRODUCTION

PHANToM, meaning Personal HAptic iNTerface Mechanism, was developed at MIT as a relatively low-cost force-feedback device for interacting with virtual objects. The PHANToM device is a robot arm that is attached to a computer and used as a pointer in three dimensions, much as a mouse is used as a pointer in two dimensions.

ABOUT PHANTOM

The PHANToM interface's novelty lies in its small size, relatively low cost and its simplification of tactile information. Rather than displaying information from many different points, this haptic device provides high-fidelity feedback to simulate touching at a single point. It is just like closing your eyes, holding a pen and touching everything in your office. You could actually tell a lot about those objects from that single point of contact. You would recognize your computer keyboard, the monitor, the telephone, the desktop and so on.
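Single-point haptic rendering of this kind is often done with a simple penalty scheme: while the interaction point is outside the virtual surface no force is applied, and once it penetrates, the device pushes back with a spring force proportional to the penetration depth. The sketch below illustrates that idea only; the stiffness value and flat-surface geometry are my assumptions, not PHANToM specifications.

    def contact_force(point_z, surface_z=0.0, stiffness=800.0):
        """Penalty-based single-point rendering of a flat virtual surface.

        point_z   : height of the haptic interaction point (metres)
        surface_z : height of the virtual surface
        stiffness : spring constant k in N/m (illustrative value)
        Returns the upward force (N) the device should exert on the user's finger.
        """
        penetration = surface_z - point_z
        if penetration <= 0:                     # not touching: free motion, no force
            return 0.0
        return stiffness * penetration           # Hooke's-law push out of the surface

    for z in (0.01, 0.0, -0.002, -0.005):        # the point descends through the surface
        print("z = %+.3f m  force = %.1f N" % (z, contact_force(z)))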

A Phantom device and the Phantom Force Feedback extension can also be used to trace paths and/or move models in the absence of volume data. Although there will not be force feedback in such cases, the increased degrees of freedom provided by the device as compared to a mouse can be very helpful. The Phantom Force Feedback extension of Chimera allows a Phantom device to be used to guide marker placement within volume data. It is generally used together with Volume Viewer and Volume Path Tracer. SensAble Technologies manufactures several models of the Phantom. The device is only supported on SGI and Windows platforms. SensAble Technologies has announced that in summer of 2002 it will add support for Linux and drop support for SGI. The least expensive model (about $10,000 in 2001), the Phantom Desktop, is described here.

To integrate the PHANToM with a projection screen virtual environment several obstacles need to be overcome. First, the PHANToM is essentially a desktop device. To use it in a larger environment the PHANToM must be made mobile and height adjustable to accommodate the user.

For this we use

Phantom Stand

The phantom stand was designed to permit positioning, height adjustment, and stable support of the PHANToM in the virtual environment. To avoid interference with the magnetic tracking system used in the environment, the phantom stand was constructed out of bonded PVC plastic and stainless steel hardware.

KEY BENEFITS

o High fidelity, 3D haptic feedback
o The ability to operate in an office/desktop environment
o Compatibility with standard PCs and UNIX workstations
o A universal design for a broad range of applications
o Low cost device
o Used to trace paths

INTRODUCTION

With the explosive growth of the Internet and its increasingly important role in our daily lives, traffic on the Internet is increasing dramatically, more than doubling every year. However, as demand and traffic increases, more and more sites are challenged to keep up, literally, particularly during peak periods of activity. Downtime or even delays can be disastrous, forcing customers and profits to go elsewhere. The solution? Redundancy, redundancy, and redundancy. Use hardware and software to build highly-available and highly-scalable network services.

Started in 1998, the Linux Virtual Server (LVS) project combines multiple physical servers into one virtual server, eliminating single points of failure (SPOF). Built with off-the-shelf components, LVS is already in use in some of the highest-trafficked sites on the Web. As more and more companies move their mission-critical applications onto the Internet, the demand for always-on services is growing. So too is the need for highly-available and highly-scalable network services. Yet the requirements for always-on service are quite onerous:

" The service must scale: when the service workload increases, the system must scale up to meet the requirements.
" The service must always be on and available, despite transient partial hardware and software failures.
" The system must be cost-effective: the whole system must be economical to build and expand.
" Although the whole system may be big in physical size, it should be easy to manage.

Clusters of servers, interconnected by a fast network, are emerging as a viable architecture for building a high-performance and highly-available service. This type of loosely-coupled architecture is more scalable, more cost-effective, and more reliable than a single processor system or a tightly-coupled multiprocessor system. However, there are challenges, including transparency and efficiency.

The Linux Virtual Server (LVS) is one solution that meets the requirements and challenges of providing an always-on service. In LVS, a cluster of Linux servers appears as a single (virtual) server on a single IP address. Client applications interact with the cluster as if it were a single, high-performance, and highly-available server. Inside the virtual server, LVS directs incoming network connections to the different servers according to scheduling algorithms. Scalability is achieved by transparently adding or removing nodes in the cluster. High availability is provided by detecting node or daemon failures and reconfiguring the system accordingly, on-the-fly.
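As a rough illustration of how connections get spread across the real servers, here is a small Python sketch of weighted round-robin scheduling, one of the classic policies a virtual-server load balancer can use. It is not LVS code; the server names and weights are made up.

    import itertools

    def weighted_round_robin(servers):
        """Yield servers in proportion to their weights, e.g. {'web1': 3, 'web2': 1}."""
        expanded = [name for name, weight in servers.items() for _ in range(weight)]
        return itertools.cycle(expanded)

    real_servers = {"web1": 3, "web2": 2, "web3": 1}   # illustrative weights
    scheduler = weighted_round_robin(real_servers)

    # The load balancer assigns each incoming connection to the next server.
    for connection_id in range(8):
        print("connection %d -> %s" % (connection_id, next(scheduler)))

Heavier-weighted servers receive proportionally more connections, which is how a cluster of machines with different capacities can be balanced.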

LINUX VIRTUAL SERVER ARCHITECTURE

The three-tier architecture consists of:
" A load balancer, which serves as the front-end of the whole cluster system. It distributes requests from clients among a set of servers, and monitors the backend servers and the other, backup load balancer.
" A set of servers, running actual network services, such as Web, email, FTP and DNS.
" Shared storage, providing a shared storage space for the servers, making it easy for the servers to have the same content and provide consistent services.
" The load balancer, servers, and shared storage are usually connected by a high-speed network, such as 100 Mbps Ethernet or Gigabit Ethernet, so that the intranetwork does not become a bottleneck of the system as the cluster grows.
