Technology

Present software technologies


Introduction

The number of subscribers with wireless access to the Internet through laptops, personal digital assistants (PDAs), cellular phones, pagers, and other wireless devices is rapidly increasing. In 1998, 1.2 million people had wireless web access. IDC predicts that in 2003 the number of wireless Internet subscribers will be 40.4 million. Because this market is growing at such a fast rate, content providers see an opportunity to enter the market by forming partnerships with wireless carriers to deliver data applications to wireless devices. In fact, companies solely dedicated to this type of service are starting to appear.

Analysts predict that e-commerce will be a key application for wireless Internet access. Buying books, trading stocks, reserving hotel rooms and renting cars from anywhere will be easy and consumers will demand these types of services. IDC states that the wireless Internet transaction value in 1998 was $4.3 billion. This number is expected to increase to $38 billion in 2003. IDC predicts that carriers will eventually charge a flat monthly fee for wireless access. Fees for wireless access will drop to be equal to or less than voice services in the next few years, allowing most people to afford wireless access to the Internet.

Wireless Content: Internet Portals
The Strategis Group defines a wireless portal as "a customized point of entry through which a wireless subscriber can access a limited number of Internet sites and services." Many wireless carriers offer Internet content to their subscribers through partnerships with some of the large Internet content portal companies. For example, AT&T offers its wireless Internet Digital PocketNet subscribers content from ABCNews.com, Bloomberg.com, AOL, and ESPN.com. Sprint PCS partners with AOL, CNN.com, Amazon.com and The Weather Channel. Other wireless carriers, such as USWest and AirTouch, have similar deals. Narrowband wireless networks, which transmit data at speeds of 56 Kbps or less, are currently more readily available than wireless broadband networks. Data delivery to wireless devices will therefore be constrained by narrowband networks, and access to graphics and other content best suited to high-speed connections will be limited. Instead, time-sensitive and personalized data delivery, as well as e-commerce activities, will fuel the initial drive for the wireless content market.

Wireless portals will be targeted toward broad consumer markets and toward vertical business markets. It is expected that portals will serve as personalized information aggregators for end-users. Corporate wireless portal solutions may offer secure end-to-end wireless connectivity for business end-users, similar to a wireline intranet. Analysts expect two types of consumer portals to appear. "Push" portals will enable the end-user to set up custom information that they would like delivered to them periodically. "Push and pull" portals will both deliver personalized content to the end-user and allow the end-user to search the portal for information. Corporate wireless portal solutions will provide customized services like time sheet and expense report monitoring and integration, billing capabilities, sales force automation, and access to inventory databases.

Definition
Bluetooth wireless technology is a cable replacement technology that provides wireless communication between portable devices, desktop devices and peripherals. It is used to swap data and synchronize files between devices without having to connect them with cables. The wireless link has a range of about 10 m, which gives the user mobility. The technology can also be used to make a wireless data connection to a conventional local area network (LAN) through an access point. There is no need for the user to open an application or press a button to initiate a process; Bluetooth wireless technology is always on and runs in the background. Bluetooth devices scan for other Bluetooth devices, and when these devices come into range they begin to exchange messages so they can become aware of each other's capabilities. The devices do not require a line of sight to exchange data.

Within a few years, about 80 percent of mobile phones are expected to carry a Bluetooth chip. The Bluetooth transceiver operates in the globally available unlicensed ISM radio band at 2.4 GHz. The ISM bands include the frequency ranges 902 MHz to 928 MHz and 2.4 GHz to 2.484 GHz, which do not require an operator license from a regulatory agency. This means that Bluetooth technology can be used virtually anywhere in the world. Another short-range wireless technology in common use is infrared. The choice between the two depends on the application. Bluetooth is an economical wireless solution that is convenient, reliable, easy to use and operates over a longer distance than infrared. Initial development started in 1994 at Ericsson. Bluetooth now has a special interest group (SIG) comprising some 1800 companies worldwide. Bluetooth technology enables voice and data transmission over a short-range radio link.

A wide range of devices can be connected easily and quickly without the need for cables. Soon people the world over will enjoy the convenience, speed and security of instant wireless connections. Bluetooth is expected to be embedded in hundreds of millions of mobile phones, PCs, laptops and a whole range of other electronic devices in the next few years. This is mainly because the elimination of cables makes the work environment look and feel more comfortable and inviting.



Origin Of Bluetooth
In 1994, Ericsson Mobile Communications initiated a study to investigate the feasibility of a low-power, low-cost radio interface between mobile phones and their accessories. The aim of the study was to find a way to eliminate cables between mobile phones and PC cards, headsets, desktops and other devices. The study was part of a larger project investigating how different communication devices could be connected to the cellular network via a mobile phone. Ericsson's work in this area caught the attention of IBM, Intel, Nokia, and Toshiba. The companies formed the special interest group (SIG) in May 1998, which grew to over 1500 member companies by April 2000. The companies jointly developed the Bluetooth 1.0 specification, which was released in July 1999.

The engineers at Ericsson code-named the new wireless technology Bluetooth to honor a 10th-century Viking king of Denmark. Harald Bluetooth reigned from 940 to 985 and is credited not only with uniting that country, but with establishing Christianity there as well. Harald's name was actually Blåtand, which roughly translates into English as 'Bluetooth'. This has nothing to do with the color of his teeth, although some claim he neither brushed nor flossed. Blåtand actually referred to Harald's very dark hair, which was unusual for a Viking. Other Viking states included Norway and Sweden, which is the connection with Ericsson (literally, Eric's son) and its selection of Bluetooth as the code-name for this wireless technology.

Definition

The European Space Agency (ESA) has programmes underway to place satellites carrying optical terminals in GEO orbit within the next decade. The first is the ARTEMIS technology demonstration satellite, which carries both a microwave and a SILEX (Semiconductor laser Inter-satellite Link EXperiment) optical inter-orbit communications terminal. SILEX employs direct detection and GaAlAs diode laser technology; the optical antenna is a 25 cm diameter reflecting telescope.

The SILEX GEO terminal is capable of receiving data modulated onto an incoming laser beam at a bit rate of 50 Mbps, and is equipped with a high-power beacon for initial link acquisition together with a low-divergence (and unmodulated) beam which is tracked by the communicating partner. ARTEMIS will be followed by the operational European Data Relay System (EDRS), which is planned to have data relay satellites (DRS). These will also carry SILEX optical data relay terminals.

Once these elements of Europe's space infrastructure are in place, there will be a need for optical communications terminals on LEO satellites which are capable of transmitting data to the GEO terminals. A wide range of LEO spacecraft is expected to fly within the next decade, including Earth observation and science missions as well as manned and military reconnaissance systems.

The LEO terminal is referred to as a user terminal, since it enables real-time transfer of LEO instrument data back to the ground for a user with access to the DRS. LEO instruments generate data over a wide range of bit rates, depending upon the function of the instrument. A significant proportion have data rates falling in the region around and below 2 Mbps, and this data would normally be transmitted via an S-band microwave inter-orbit link (IOL).

ESA initiated a development programme in 1992 for a LEO optical IOL terminal targeted at this segment of the user community. This is known as the Small Optical User Terminal (SOUT), with features of low mass, small size and compatibility with SILEX. The programme is in two phases. Phase 1 was to produce a terminal flight configuration and perform detailed subsystem design and modelling. Phase 2, which started in September 1993, is to build an elegant breadboard of the complete terminal.


Definition

Although road safety performance is good, the number of people killed and injured on our roads remains unacceptably high. The road safety strategy was therefore introduced to support new casualty-reduction targets. The strategy covers all forms of intervention based on engineering, education and enforcement, and recognizes that many different factors lead to traffic collisions and casualties. A major factor is vehicle speed. Traffic lights and other traffic-management measures are used to reduce speed; one of them is the speed camera.

Speed cameras are installed at the side of urban and rural roads, usually placed to catch transgressors of the stipulated speed limit for that road. Their sole purpose is to identify and prosecute those drivers who pass them while exceeding the stipulated speed limit.

At first glance this seems reasonable: ensuring that road users do not exceed the speed limit must be a good thing, because it increases road safety, reduces accidents and protects other road users and pedestrians.
So speed limits are a good idea. To enforce them, laws are passed making speeding an offence and signs are erected to indicate the maximum permissible speed. The police cannot be everywhere to enforce the limit, so enforcement cameras are directed to do this work; no one with an ounce of common sense deliberately drives through a speed camera in order to be fined and penalized.
So nearly everyone slows down for the speed camera. We finally have a solution to the speeding problem. Now, if we assume that speed cameras are the only way to make drivers slow down, and that they work efficiently, then we would expect there to be a great number of them everywhere.

Definition

RF light sources follow the same principles of converting electrical power into visible radiation as conventional gas discharge lamps. The fundamental difference between RF lamps and conventional lamps is that RF lamps operate without electrodes. The presence of electrodes in conventional fluorescent and high-intensity discharge lamps has put many restrictions on lamp design and performance and is a major factor limiting lamp life.

Recent progress in semiconductor power switching electronics, which is revolutionizing many sectors of the electrical industry, together with a better understanding of RF plasma characteristics, has made it possible to drive lamps at high frequencies. The very first proposal for RF lighting, as well as the first patent on RF lamps, appeared about 100 years ago, half a century before the basic principles of lighting technology based on gas discharges had been developed.

Discharge tubes
A discharge tube is a device in which a gas conducting an electric current emits visible light. It is usually a glass tube from which virtually all the air has been removed (producing a near vacuum), with electrodes at each end. When a high-voltage current is passed between the electrodes, the few remaining gas atoms (or some deliberately introduced ones) ionize and emit coloured light as they conduct the current along the tube.

The light originates as electrons change energy levels in the ionized atoms. By coating the inside of the tube with a phosphor, invisible emitted radiation (such as ultraviolet light) can produce visible light; this is the principle of the fluorescent lamp. We will consider different kinds of RF discharges and their advantages and restrictions for lighting applications.

Definition

The Wireless Application Protocol Forum is an industry group dedicated to the goal of enabling sophisticated telephony and information services on handheld wireless devices. These devices include mobile telephones, pagers, personal digital assistants (PDAs) and other wireless terminals. Recognizing the value and utility of the World Wide Web architecture, the WAP Forum has chosen to align its technology closely with the Internet and the Web. The WAP specification extends and leverages existing technologies, such as IP, HTTP, XML, SSL, URLs, scripting and other content formats. Ericsson, Motorola, Nokia and Unwired Planet founded the WAP Forum in June 1997.
Since then, it has experienced impressive membership growth, with members joining from the ranks of the world's premier wireless service providers, handset manufacturers, infrastructure providers, and software developers. WAP Forum membership is open to all industry participants.



Goals of WAP Forum
The WAP Forum has the following goals:

To bring Internet content and advanced data services to wireless phones and other wireless terminals.

To create a global wireless protocol specification that works across all wireless network technologies.

To enable the creation of content and applications that scale across a wide range of wireless bearer networks and device types.

To embrace and extend existing standards and technology wherever possible and appropriate.

It is also very important that the WAP Forum's specifications be designed in such a way that they complement existing standards. For example, the WAP 1.0 specification is designed to sit on top of existing bearer channel standards, so that any bearer standard can be used with the WAP protocols to implement complete product solutions. When the WAP Forum identifies a new area of technology where a standard does not exist, or exists but needs modification for wireless, it works to submit its specifications to other industry standards groups.



WAP Protocol Stack
Any network is organized as a series of layers or levels, where each level performs a specific function. The set of rules that governs the communication between peer entities within a layer is called a protocol. The layers and protocols together form the protocol stack. A request from the mobile device is sent as a URL through the operator's network to the WAP gateway, which is the interface between the wireless network and the Internet.

Definition

In a constantly changing industry, HDMI is the next major attempt at an all-in-one, standardized, universal connector for audio/video applications. Featuring a modern design and backed by the biggest names in the electronics industry, HDMI is set to finally unify all digital media components with a single cable, remote, and interface.
HDMI is built with a 5 Gbps bandwidth limit, over twice that of HDTV (which runs at 2.2 Gbps), and is built forward-compatible by offering unallocated pipeline for future technologies. The connectors are sliding contact (like FireWire and USB) instead of screw-on (like DVI), and are not nearly as bulky as most current video interfaces.

The screaming bandwidth of HDMI is structured around delivering the highest-quality digital video and audio throughout your entertainment center. Capable of all international frequencies and resolutions, the HDMI cable will replace all analog signals (i.e. S-Video, Component, Composite, and Coaxial), as well as HDTV digital signals (i.e. DVI, P&D, DFP), with absolutely no compromise in quality.
Additionally, HDMI is capable of carrying up to 8 channels of digital-audio, replacing the old analog connections (RCA, 3.5mm) as well as optical formats (SPDIF, Toslink).

VIDEO INTERFACES

Video Graphics Array (VGA) is an analog computer display standard first marketed in 1987 by IBM. While it has been obsolete for some time, it was the last graphical standard that the majority of manufacturers decided to follow, making it the lowest common denominator that all PC graphics hardware supports prior to a device-specific driver being loaded. For example, the Microsoft Windows splash screen appears while the machine is still operating in VGA mode, which is the reason that this screen always appears in reduced resolution and color depth.

The term VGA is often used to refer to a resolution of 640×480, regardless of the hardware that produces the picture. It may also refer to the 15-pin D-subminiature VGA connector which is still widely used to carry analog video signals of all resolutions.

VGA was officially superseded by IBM's XGA standard, but in reality it was superseded by numerous extensions to VGA made by clone manufacturers that came to be known as "Super VGA".

A Male DVI-I Plug
The DVI interface uses a digital protocol in which the desired brightness of pixels is transmitted as binary data. When the display is driven at its native resolution, all it has to do is read each number and apply that brightness to the appropriate pixel. In this way, each pixel in the output buffer of the source device corresponds directly to one pixel in the display device, whereas with an analog signal the appearance of each pixel may be affected by its adjacent pixels as well as by electrical noise and other forms of analog distortion.
Previous standards such as the analog VGA were designed for CRT-based devices and thus did not use discrete time. As the analog source transmits each horizontal line of the image, it varies its output voltage to represent the desired brightness. In a CRT device, this is used to vary the intensity of the scanning beam as it moves across the screen.


Definition

Extreme Programming (XP) is actually a deliberate and disciplined approach to software development. About six years old, it has already been proven at many companies of all different sizes and industries worldwide. XP is successful because it stresses customer satisfaction. The methodology is designed to deliver the software your customer needs when it is needed. XP empowers software developers to confidently respond to changing customer requirements, even late in the life cycle. This methodology also emphasizes teamwork. Managers, customers, and developers are all part of a team dedicated to delivering quality software. XP implements a simple, yet effective way to enable groupware style development.

XP improves a software project in four essential ways: communication, simplicity, feedback, and courage. XP programmers communicate with their customers and fellow programmers. They keep their design simple and clean. They get feedback by testing their software starting on day one. They deliver the system to the customers as early as possible and implement changes as suggested. With this foundation, XP programmers are able to courageously respond to changing requirements and technology. XP is different. It is a lot like a jigsaw puzzle: there are many small pieces, and individually the pieces make no sense, but when combined together a complete picture can be seen. This is a significant departure from traditional software development methods and ushers in a change in the way we program.

If one or two developers have become bottlenecks because they own the core classes in the system and must make all the changes, then try collective code ownership. You will also need unit tests. Let everyone make changes to the core classes whenever they need to. You could continue this way until no problems are left. Then just add the remaining practices as you can. The first practice you add will seem easy. You are solving a large problem with a little extra effort. The second might seem easy too. But at some point between having a few XP rules and all of the XP rules it will take some persistence to make it work. Your problems will have been solved and your project is under control.

It might seem good to abandon the new methodology and go back to what is familiar and comfortable, but continuing does pay off in the end. Your development team will become much more efficient than you thought possible. At some point you will find that the XP rules no longer seem like rules at all. There is a synergy between the rules that is hard to understand until you have been fully immersed. This uphill climb is especially true with pair programming, but the payoff of this technique is very large. Also, unit tests will take time to collect, but unit tests are the foundation for many of the other XP practices, so the payoff is very great.

XP projects are not quiet; there always seems to be someone talking about problems and solutions. People move about, asking each other questions and trading partners for programming. People spontaneously meet to solve tough problems, and then disperse again. Encourage this interaction, provide a meeting area and set up workspaces such that two people can easily work together. The entire work area should be open space to encourage team communication. The most obvious way to start extreme programming (XP) is with a new project. Start out collecting user stories and conducting spike solutions for things that seem risky. Spend only a few weeks doing this. Then schedule a release planning meeting. Invite customers, developers, and managers to create a schedule that everyone agrees on. Begin your iterative development with an iteration planning meeting. Now you're started.

Definition


In recent years, that is in the past five years, Linux has seen significant growth as a server operating system and has been successfully deployed in the enterprise for Web, file and print serving. With the advent of kernel version 2.4, Linux has seen a tremendous boost in scalability and robustness, which makes it feasible to deploy even more demanding enterprise applications such as high-end databases, business intelligence software, application servers, etc. As a result, whole enterprise business suites and middleware such as SAP, WebSphere, Oracle, etc., are now available on Linux.

For these enterprise applications to run efficiently on Linux, or on any other operating system, the OS must provide the proper abstractions and services. These enterprise applications and application suites are increasingly built as multi-process/multithreaded applications. Such suites are often a collection of multiple independent subsystems. Despite functional variations, these applications often need to communicate with each other and sometimes need to share a common state. An example is a database system, which typically maintains shared I/O buffers in user space.

Access to such shared state must be properly synchronized. Allowing multiple processes to access the same resources in a time-sliced manner, or potentially concurrently in the case of multiprocessor systems, can cause many problems. This is due to the need to maintain data consistency, maintain true temporal dependencies and ensure that each thread properly releases the resource when it has completed its action. Synchronization can be established through locks. There are mainly two types of locks: exclusive locks and shared locks. Exclusive locks allow only a single user to access the protected entity, while shared locks implement multiple-reader/single-writer semantics. Synchronization implies a shared state, indicating that a particular resource is available or busy, and a means to wait for its availability. The latter can be accomplished either through busy-waiting or through an explicit or implicit call to the scheduler.
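As a concrete illustration of the two lock types, here is a minimal C sketch using POSIX read-write locks; the shared counter, thread roles and thread count are invented purely for this example and are not taken from any particular application discussed here. Readers acquire the lock in shared mode, so several of them may hold it at once, while the writer acquires it exclusively.

/* Minimal sketch: shared (read) vs. exclusive (write) locking with
 * POSIX read-write locks. Illustrative only. Build with -lpthread. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static int shared_state = 0;

/* Readers take the lock in shared mode: many may hold it at once. */
static void *reader(void *arg)
{
    pthread_rwlock_rdlock(&lock);
    printf("reader %ld sees %d\n", (long)arg, shared_state);
    pthread_rwlock_unlock(&lock);
    return NULL;
}

/* The writer takes the lock exclusively: no readers or other writers
 * may hold it at the same time. */
static void *writer(void *arg)
{
    (void)arg;
    pthread_rwlock_wrlock(&lock);
    shared_state++;
    pthread_rwlock_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    pthread_create(&t[0], NULL, reader, (void *)1L);
    pthread_create(&t[1], NULL, writer, NULL);
    pthread_create(&t[2], NULL, reader, (void *)2L);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}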

CONCURRENCY IN LINUX OS

As different processes interact with each other they may often need to access and modify shared sections of code, memory locations and data. The section of code belonging to a process or thread which manipulates a variable that is also being manipulated by another process or thread is commonly called a critical section. Proper synchronization serializes access to the critical section. Processes operate within their own virtual address space and are protected by the operating system from interference by other processes. By default a user process cannot communicate with another process unless it makes use of secure, kernel-managed mechanisms. There are many times when processes will need to share common resources or synchronize their actions. One possibility is to use threads, which by definition can share memory within a process.

This option is not always possible (or wise) due to the many disadvantages which can be experienced with threads. Methods of passing messages or data between processes are therefore required. In traditional UNIX systems the basic mechanisms for synchronization were the System V IPC (inter-process communication) facilities such as semaphores, message queues and sockets, and the file locking mechanisms such as the flock() and fcntl() functions. Message queues (msgqueues) consist of a linked list within the kernel's addressing space. Messages are added to the queue sequentially and may be retrieved from the queue in several different ways. Semaphores are counters used to control access to shared resources by multiple processes.
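As a small, hedged example of the System V primitives mentioned above, the C sketch below uses a single System V semaphore as a mutual-exclusion lock between a parent and a child process. The key choice (IPC_PRIVATE), the initial value and the printed messages are illustrative assumptions, and error handling is omitted for brevity.

/* Minimal sketch: a System V semaphore used as a mutex between two
 * related processes. Illustrative only; no error handling. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/wait.h>
#include <unistd.h>

union semun { int val; };                 /* caller-defined, per semctl(2) */

static void sem_change(int semid, short delta)
{
    /* -1 waits/locks the semaphore, +1 signals/unlocks it. */
    struct sembuf op = { .sem_num = 0, .sem_op = delta, .sem_flg = 0 };
    semop(semid, &op, 1);
}

int main(void)
{
    /* One semaphore, initialised to 1 (unlocked). */
    int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
    union semun arg = { .val = 1 };
    semctl(semid, 0, SETVAL, arg);

    if (fork() == 0) {                    /* child */
        sem_change(semid, -1);            /* enter critical section */
        printf("child in critical section\n");
        sem_change(semid, +1);            /* leave critical section */
        return 0;
    }

    sem_change(semid, -1);                /* parent enters critical section */
    printf("parent in critical section\n");
    sem_change(semid, +1);

    wait(NULL);
    semctl(semid, 0, IPC_RMID);           /* remove the semaphore set */
    return 0;
}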

Definition


Requirements for removable media storage devices (RMSDs) used with personal computers have changed significantly since the introduction of the floppy disk in 1971. At one time, desktop computers depended on floppy disks for all of their storage requirements. Even with the advent of multigigabyte hard drives, floppy disks and other RMSDs are still an integral part of most computer systems, providing:

- Transport between computers for data files and software
- Backup to preserve data from the hard drive
- A way to load the operating system software in the event of a hard drive failure.

Data storage devices currently come in a variety of capacities, access times, data transfer rates and costs per gigabyte. The best overall performance figures are currently achieved using hard disk drives (HDD), which can be integrated into RAID systems (redundant arrays of inexpensive disks) at costs of $10 per GByte (1999). Optical disc drives (ODD) and tapes can be configured in the form of jukeboxes and tape libraries, with costs of a few dollars per GByte for the removable media. However, the complex mechanical library mechanism limits data access time to several seconds and adversely affects reliability.

Most information is still stored in non-electronic form, with very slow access and excessive costs (e.g., text on paper, at a cost of $10,000 per GByte). Some RMSD options available today are approaching the performance, capacity, and cost of hard-disk drives. Considerations for selecting an RMSD include capacity, speed, convenience, durability, data availability, and backward-compatibility. Technology options used to read and write data include:

- Magnetic formats that use magnetic particles and magnetic fields.
- Optical formats that use laser light and optical sensors.
- Magneto-optical and magneto-optical hybrids that use a combination of magnetic and optical properties to increase storage capacity.

The introduction of the Fluorescent Multi-layer Disc (FMD) smashes the barriers of existing data storage formats. Depending on the application and the market requirements, the first generation of 120 mm (CD-sized) FMD ROM discs will hold 20-100 gigabytes of pre-recorded data on 12-30 data layers with a total thickness of under 2 mm. In comparison, a standard DVD disc holds just 4.7 gigabytes. With C3D's (Constellation 3D) proprietary parallel reading and writing technology, data transfer speeds can exceed 1 gigabit per second, again depending on the application and market need.

WHY FMD?

Increased Disc Capacity
FMD offers DVD data density (4.7 GB) on each layer, with data carriers of up to 100 layers. Initially, the FMD disc will hold anywhere from 25-140 GB of data depending on market need. Eventually a terabyte of data on a single disc will be achievable.

Quick Parallel Access and Retrieval of Information
Reading from several layers and multiple tracks at a time, which is nearly impossible using the reflective technology of a CD/DVD, is easily achieved in FMD. This will allow for retrieval speeds of up to 1 gigabyte per second.

Media Tolerances
By using incoherent light to read data, FMD/FMC media will have far fewer restrictions on temperature range, vibration and air cleanliness during manufacturing, and will provide a considerably more robust data carrier than existing CDs and DVDs.

Definition

Gaming consoles have proved themselves to be the best in digital entertainment. Gaming consoles were designed for the sole purpose of playing electronic games and nothing else. A gaming console is a highly specialised piece of hardware that has rapidly evolved since its inception incorporating all the latest advancements in processor technology, memory, graphics, and sound among others to give the gamer the ultimate gaming experience.

WHY GAMING IS SO IMPORTANT TO THE COMPUTER INDUSTRY

Research conducted in 2002 shows that 60% of US residents aged six and above play computer games. Over 221 million computer and video games were sold in the U.S. Earlier research found that 35% of U.S. residents surveyed said that video games were the most entertaining media activity, while television came in a distant second at 18%. The U.S. gaming industry reported sales of over $6.5 billion in the fiscal year 2002-03. Datamonitor estimates that online gaming revenues will reach $2.9 billion by 2005. Additional research has found that 90% of U.S. households with children have rented or owned a computer or video game and that U.S. children spend an average of 20 minutes a day playing video games. Research conducted by the Pew Internet and American Life Project showed that 66% of American teenagers play or download games online. While 57% of girls play online, 75% of boys reported having played internet games. This has a great influence on online game content and the multiplayer capability of websites.

The global computer and video game industry, generating revenue of over 20 billion U.S. dollars a year, forms a major part of the entertainment industry. The sales of major games are counted in millions (and these are for software units that often cost 30 to 50 UK pounds each), meaning that total revenues often match or exceed cinema movie revenues. Game playing is widespread; surveys collated by organisations such as the Interactive Digital Software Association indicate that up to 60 per cent of people in developed countries routinely play computer or video games, with an average player age in the mid to late twenties, and only a narrow majority being male. Add on those who play the occasional game of Solitaire or Minesweeper on the PC at work, and one observes a phenomenon more common than buying a newspaper, owning a pet, or going on holiday abroad.

Why are games so popular? The answer to this question is to be found in real life. Essentially, most people spend much of their time playing games of some kind or another: making it through traffic lights before they turn red, attempting to catch the train or bus before it leaves, completing the crossword, or answering the questions correctly on Who Wants To Be A Millionaire before the contestants. Office politics forms a continuous, real-life strategy game which many people play, whether they want to or not, with player-definable goals such as 'increase salary to next level', 'become the boss', 'score points off a rival colleague and beat them to that promotion' or 'get a better job elsewhere'. Gaming philosophers who frequent some of the many game-related online forums periodically compare aspects of gaming to real life, with the key difference being that when "Game Over" is reached in real life, there is no restart option.

Definition

In a constantly changing industry, HDMI is the next major attempt at an all-in-one, standardized, universal connector for audio/video applications. Featuring a modern design and backed by the biggest names in the electronic industry, HDMI is set to finally unify all digital media components with a single cable, remote, and interface.

HDMI is built with a 5 Gbps bandwidth limit, over twice that of HDTV (which runs at 2.2 Gbps), and is built forward-compatible by offering unallocated pipeline for future technologies. The connectors are sliding contact (like FireWire and USB) instead of screw-on (like DVI), and are not nearly as bulky as most current video interfaces.
HDMI 1.3 further increases the bandwidth limit to 10.2 Gbps, to allow for the video and audio improvements of the upgraded standard.

The screaming bandwidth of HDMI is structured around delivering the highest-quality digital video and audio throughout your entertainment center. Capable of all international frequencies and resolutions, the HDMI cable will replace all analog signals (i.e. S-Video, Component, Composite, and Coaxial), as well as HDTV digital signals (i.e. DVI, P&D, DFP), with absolutely no compromise in quality.

Additionally, HDMI is capable of carrying up to 8 channels of digital-audio, replacing the old analog connections (RCA, 3.5mm) as well as optical formats (SPDIF, Toslink).

The HDMI Founders include leading consumer electronics manufacturers Hitachi, Matsushita Electric Industrial (Panasonic), Philips, Sony, Thomson (RCA), Toshiba, and Silicon Image. Digital Content Protection, LLC (a subsidiary of Intel) is providing High-bandwidth Digital Content Protection (HDCP) for HDMI. In addition, HDMI has the support of major motion picture producers Fox and Universal, and system operators DirecTV, EchoStar (Dish Network) as well as CableLabs.

HDMI and HDCP are two distinctly separate standards, owned by separate governing entities. The HDMI Working Group comprises seven founding companies: Hitachi, Matsushita (best known for the Panasonic brand), Philips, Silicon Image, Sony, Thomson (known for RCA-branded products) and Toshiba. These companies worked together to develop the HDMI specification, which is currently at version 1.1. HDMI Licensing LLC administers HDMI licenses and the mandatory compliance testing associated with HDMI.

Definition

The explosion of Java over the last year has been driven largely by its role in bringing a new generation of interactive web pages to the World Wide Web. Undoubtedly various features of the language, such as compactness, byte-code portability and security, make it particularly attractive as an implementation language for applets embedded in web pages. But it is clear that the ambitions of the Java development team go well beyond enhancing the functionality of HTML documents.

"Java is designed to meet the challenges of application development on the context of heterogeneous, network-wide distributed environments. Paramount among these chalanges is secure delivery of applications that consume the minimum of systems resources, can run on any hardware and software platform, can be extended dynamically."

Several of these concerns are mirrored in developments in the high-performance computing world over a number of years. A decade ago the focus of interest in the parallel computing community was on parallel hardware. A parallel computer was typically built from specialized processors connected through a proprietary high-performance communication switch. If the machine also had to be programmed in a proprietary language, that was an acceptable price for the benefits of using a supercomputer. This attitude was not sustainable as one parallel architecture gave way to another, and the cost of porting software became exorbitant. For several years now, portability across platforms has been a central concern in parallel computing.

HPJava is a programming language extended from Java to support parallel programming, especially (but not exclusively) data parallel programming on message passing and distributed memory systems, from multi-processor systems to workstation clusters.

Although it has a close relationship with HPF, the design of HPJava does not inherit the HPF programming model. Instead the language introduces a high-level structured SPMD programming style--the HPspmd model. A program written in this kind of language explicitly coordinates well-defined process groups. These cooperate in a loosely synchronous manner, sharing logical threads of control. As in a conventional distributed-memory SPMD program, only a process owning a data item such as an array element is allowed to access the item directly. The language provides special constructs that allow programmers to meet this constraint conveniently.
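HPJava's own syntax is not reproduced here. As a rough, hedged illustration of the underlying SPMD style it builds on (a process group, locally owned sections of a distributed array, and a collective operation), the following C sketch uses MPI rather than HPJava; the array size, the block distribution and the reduction are assumptions made only for this example.

/* Generic SPMD sketch in C with MPI: each process owns one block of a
 * conceptually global array and touches only the elements it owns.
 * Assumes the process count divides GLOBAL_N evenly. */
#include <mpi.h>
#include <stdio.h>

#define GLOBAL_N 16

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Block distribution: this process owns global indices [lo, lo + local_n). */
    int local_n = GLOBAL_N / size;
    int lo = rank * local_n;
    double local[GLOBAL_N];

    /* Initialise only the locally owned elements, using the global
     * index as the value so the local-to-global mapping is visible. */
    for (int i = 0; i < local_n; i++)
        local[i] = (double)(lo + i);

    /* A collective operation combines the locally owned data. */
    double local_sum = 0.0, global_sum = 0.0;
    for (int i = 0; i < local_n; i++)
        local_sum += local[i];
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
               MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over the distributed array = %g\n", global_sum);

    MPI_Finalize();
    return 0;
}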

Besides the normal variables of the sequential base language, the language model introduces classes of global variables that are stored collectively across process groups. Primarily, these are distributed arrays. They provide a global name space in the form of globally subscripted arrays, with assorted distribution patterns. This helps to relieve programmers of error-prone activities such as the local-to-global, global-to-local subscript translations which occur in data parallel applications.

In addition to special data types the language provides special constructs to facilitate both data parallel and task parallel programming. Through these constructs, different processors can either work simultaneously on globally addressed data, or independently execute complex procedures on locally held data. The conversion between these phases is seamless.

Definition

The Human-Computer Interface (HCI) deals with the methods by which computers and their users communicate. It is the process of designing interface software so that computers are pleasant, easy to use and do what people want them to do. Dealing with HCI requires the study of not only the hardware of the computer, but that of the human side also. Therefore attention must be paid to human psychology and physiology.

This is because, to build better two-way communication, one must know the capabilities and limitations of both sides. This seminar also deals with concepts and guidelines that should be followed in order to produce a good HCI. Topics specifically dealt with include dialogue design, presentation design, and general input and output.

HUMAN PSYCHOLOGY & PHYSIOLOGY

This section mainly deals with the way humans communicate.

The human brain is where all the cognitive functions take place. It is ultimately where a human receives, interprets and stores information. Information can be processed by the sense organs and sent to the brain faster and more precisely than the brain can handle. Many models have been developed that try to use a computer analogy for brain functions, with mixed success. They are however quite useful because they present a model with which we can illustrate capabilities and limitations.

These models suggest that there are two forms of human memory: short term and long term. Each sense appears to have its own short-term memory, which acts like a buffer or staging area for input from the particular sense organ to the brain. Any memory that is not reinforced and moved to long-term memory is forgotten. Short-term memory has a capacity of about seven blocks of information, but this seems to be able to be increased with practice and added levels of abstraction and association.

In order for information to be remembered it must be moved into long-term memory. This can be a conscious act as in deliberately memorizing something through repetition or unconscious as when a particularly interesting piece of data is retrieved and requires more thought. No maximum size of long-term memory has yet been determined. This aspect of memory and the fact that the human brain can only process so much information is important to the layout of an HCI. People sometimes describe a particular screen as "too busy". What this means is that there is too much information on the screen at once. The brain is incapable of taking in so much information at once and ambiguity and confusion results. Precision should be a primary concern for the HCI designer.

Any major industry's success depends invariably on the location of its bases, production centers and warehouses. Locating the sites before establishing these units is therefore done by the facility location and planning unit of the industry. For greater profits, the facilities should be located at an optimum distance from the market, raw material procurement sites and utilities such as water, sand, etc. For these layout problems a number of algorithms are in use, such as ALDEP, CORELAP and CRAFT. But since facility location has become very complex due to greater constraints these days, a determined search for a better algorithm has begun. This can be achieved by using genetic algorithms. This type of evolutionary algorithm has made the computational effort fast and accurate.

Introduction
Material handling and layout related costs have been estimated to be about 20%-50% of the total operating expenses in manufacturing. To stay competitive in the market these high overhead costs have to be reduced considerably. One way of doing this is to develop an efficient facility layout. The secondary benefit of doing so is in reducing the large Work-In-Process inventory and justifying the costly long-term investment. Developing an efficient layout is primarily finding the most efficient arrangement of n facilities in m locations (m >= n).

Traditionally the layout problem has been presented as a Quadratic Assignment Problem (QAP). The layout problem can also be termed a one-dimensional or two-dimensional problem, corresponding to single-row or multi-row patterns of layout. It is well known that the QAP falls into the NP-complete category due to the combinatorial function involved, and it cannot be solved exactly for large layout problems. An alternative model for the QAP consists of absolute values in the objective function and constraints, and can be used for continuous formulations instead of discrete ones. The efficiency of these models, however, depends upon efficient integer programming algorithms.

PROBLEM FORMULATION
The facility layout problem has been termed a Quadratic Assignment Problem (QAP) because the objective function is a second-degree polynomial function of the variables, and the constraints are identical to the constraints of the assignment problem. The objective of the QAP is to find the optimal assignment of n facilities to n sites in order to minimize the material handling cost, expressed as the product of workflow and travel distance. The QAP can be formally stated as

    minimize  sum over all pairs (i, j) of  w_ij * d(a(i), a(j))

where w_ij is the workflow between facilities i and j and a(i) denotes the location to which facility i has been assigned. The distance function d is any one of the l_p distances between the locations a(i) and a(j), defined as

    d_p(a(i), a(j)) = ( |x_a(i) - x_a(j)|^p + |y_a(i) - y_a(j)|^p )^(1/p)

where (x_a(i), y_a(i)) and (x_a(j), y_a(j)) are the geometric centers of the locations a(i) and a(j). If p = 1 the distances are rectilinear, whereas when p = 2 the distances are Euclidean. Each position can be occupied by only one facility and no facilities overlap each other. The algorithms are based on the aforementioned statements and assumptions, taking care of both the rectilinear and the Euclidean distances while minimizing the objective.
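To make the objective concrete, the following minimal C sketch evaluates the material handling cost of a given assignment a using rectilinear (p = 1) distances; the three-facility workflow matrix and location coordinates are invented for illustration only. A genetic algorithm for this problem would search over permutations a, using a cost function like this one to compute fitness.

/* Minimal sketch: evaluating the QAP objective
 *   cost = sum over (i, j) of w[i][j] * d(a(i), a(j))
 * with rectilinear distances between location centres. Example data only. */
#include <math.h>
#include <stdio.h>

#define N 3

double qap_cost(double w[N][N], int a[N], double x[N], double y[N])
{
    double cost = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            /* rectilinear (p = 1) distance between the centres of the
             * locations assigned to facilities i and j */
            double d = fabs(x[a[i]] - x[a[j]]) + fabs(y[a[i]] - y[a[j]]);
            cost += w[i][j] * d;
        }
    return cost;
}

int main(void)
{
    double w[N][N] = { {0, 5, 2}, {5, 0, 3}, {2, 3, 0} };  /* workflow */
    double x[N] = {0, 10, 20}, y[N] = {0, 0, 5};           /* location centres */
    int a[N] = {2, 0, 1};       /* a[i]: location assigned to facility i */
    printf("material handling cost = %.1f\n", qap_cost(w, a, x, y));
    return 0;
}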

Definition

An automotive controller that complements the driving experience must work to avoid collisions, enforce a smooth trajectory, and deliver the vehicle to the intended destination as quickly as possible. Unfortunately, satisfying these requirements with traditional methods proves intractable at best and forces us to consider biologically-inspired techniques like swarm intelligence.

A controller is currently being designed in a robot simulation program with the goal of implementing the system in real hardware to investigate these biologically-inspired techniques and to validate the results. In this paper I present an idea that can be implemented in traffic safety by the application of Robotics & Computer Vision through Swarm Intelligence.

We stand today at the culmination of the industrial revolution. For the last four centuries, rapid advances in science have fueled industrial society. In the twentieth century, industrialization found perhaps its greatest expression
in Henry Ford's assembly line. Mass production affects almost every facet of modern life. Our food is mass produced in meat plants, commercial bakeries, and canneries.

Our clothing is shipped by the ton from factories in China and Taiwan. Certainly all the amenities of our lives - our stereos, TVs, and microwave ovens - roll off assembly lines by the truck load. Today, we're presented with another solution, that hopefully will fare better than its predecessors. It goes by the name of post-industrialism, and is commonly associated with our computer technology with Robots and Artificial Intelligence.

Robots are today where computers were 25 years ago. They're huge, hulking machines that sit on factory floors, consume massive resources and can only be afforded by large corporations and governments. Then came the PC revolution of the 1980s, when computers came out of the basements and landed on the desktops. So we're on the verge of a "PR" revolution today - a Personal Robotics revolution, which will bring the robots off the factory floor and put them in our homes, on our desktops and inside our vehicles.

Definition

The TErrestrial Trunked RAdio (TETRA) standard was designed to meet some common requirements and objectives of the PMR and PAMR markets alike. One of the last strongholds of analog technology in a digital world has been the area of trunked mobile radio. Although digital cellular technology has made great strides with broad support from a relatively large number of manufacturers, digital trunked mobile radio systems for the Private Mobile Radio (PMR) and Public Access Mobile Radio (PAMR) market have lagged behind. Few manufacturers currently offer digital systems, all of which are based on proprietary technology. However, the transition to digital is gaining momentum with the emergence of an open standard: TETRA.

TETRA is a digital PMR standard developed by ETSI. It is an open standard that offers interoperability of equipment and networks from different manufacturers, and it is a potential replacement for analog and proprietary digital systems. The standard originated in 1989 as the Mobile Digital Trunked Radio System (MDTRS), was later renamed Trans-European Trunked Radio, and has been called TETRA since 1997.

TETRA is the agreed standard for a new generation of digital land mobile radio communications designed to meet the needs of the most demanding Professional Mobile Radio (PMR) and Public Access Mobile Radio (PAMR) users. TETRA is the only existing digital PMR standard defined by the European Telecommunications Standards Institute (ETSI).

Among the standard's many features are voice and extensive data communications services. Networks based on the TETRA standard will provide cost-effective, spectrum-efficient and secure communications with advance capabilities for the mobile and fixed elements of companies and organizations.

As a standard, TETRA should be regarded as complementary to GSM and DECT. In comparison with GSM as currently implemented, TETRA provides faster call set-up, higher data rates, group calls and direct mode. TETRA manufacturers have been developing their products for ten years. The investments have resulted in highly sophisticated products, and a number of important orders have already been placed. According to estimates, TETRA-based networks will have 5-10 million users by the year 2010.

Definition

Wireless transmission of electromagnetic radiation (communication signals) has become a popular method of transmitting RF signals such as cordless, wireless and cellular telephone signals, pager signals, two-way radio signals, video conferencing signals and LAN signals indoors.

Indoor wireless transmission has the advantage that the building in which transmission takes place does not have to be filled with wires or cables equipped to carry a multitude of signals. Wires and cables are costly to install and may require expensive upgrades when their capacity is exceeded or when new technologies require different types of wires and cables than those already installed.

Traditional indoor wireless communication systems transmit and receive signals through a network of transmitters, receivers and antennas placed throughout the interior of a building. Devices must be located so that signals are not lost and signal strength is not unduly attenuated. A change in the existing architecture also affects the wireless transmission. Another challenge related to installing wireless networks in buildings is the need to predict RF propagation and coverage in the presence of complex combinations of shapes and materials in the buildings.

In general, the attenuation in buildings is larger than that in free space, requiring more cells and higher power to obtain wider coverage. Despite this, placement of transmitters, receivers and antennas in an indoor environment is largely a process of trial and error. Hence there is a need for a method and a system for efficiently transmitting RF and microwave signals indoors without having to install an extensive system of wires and cables inside the buildings.

This paper suggests an alternative method of distributing electromagnetic signals in buildings by the recognition that every building is equipped with an RF wave guide distribution system, the HVAC ducts. The use of HVAC ducts is also amenable to a systematic design procedure but should be significantly less expensive than other approaches since existing infrastructure is used and RF is distributed more efficiently.

Definition

"Money in the 21st century will surely prove to be as different from the money of the current century as our money is from that of the previous century. Just as fiat money replaced specie-backed paper currencies, electronically initiated debits and credits will become the dominant payment modes, creating the potential for private money to compete
with government-issued currencies." Just as everything is coming under the shadow of "e" today, we have paper currency being replaced by electronic money, or e-cash.

Hardly a day goes by without some mention in the financial press of new developments in "electronic money". In the emerging field of electronic commerce, novel buzzwords like smartcards, online banking, digital cash, and electronic checks are being used to discuss money. But how are these brand-new forms of payment secure? And most importantly, which of these emerging secure electronic money technologies will survive into the next century?

These are some of the tough questions to answer, but here is a solution which provides a form of security for these modes of currency exchange using biometrics technology. The Money Pad introduced here uses biometric fingerprint recognition. The Money Pad is a form of credit card or smart card.

Every time the user wants to access the Money Pad, he has to make an impression of his fingers, which will be scanned and matched with the one stored on the database server's hard disk. If the fingerprint matches the user's, he will be allowed to access and use the Pad; otherwise the Money Pad is not accessible. This provides a form of security for the everlasting transaction currency of the future, e-cash.

Money Pad - a form of credit card or smart card, similar to a floppy disk, which is introduced to provide secure e-cash transactions.


Definition

The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This document aims to summarize not just quantum computing, but the whole subject of quantum information theory. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, the paper begins with an introduction to classical information theory. The principles of quantum mechanics are then outlined.

The EPR-Bell correlation and quantum entanglement in general, form the essential new ingredient, which distinguishes quantum from classical information theory, and, arguably, quantum from classical physics. Basic quantum information ideas are described, including key distribution, teleportation, the universal quantum computer and quantum algorithms. The common theme of all these ideas is the use of quantum entanglement as a computational resource.

Experimental methods for small quantum processors are briefly sketched, concentrating on ion traps, superconducting cavities, nuclear magnetic resonance (NMR) based techniques, and quantum dots. "Where a calculator on the Eniac is equipped with 18000 vacuum tubes and weighs 30 tons, computers in the future may have only 1000 tubes and weigh only 1 1/2 tons" - Popular Mechanics, March 1949.

Now, if this seems like a joke, wait a second. "Tomorrow's computer might well resemble a jug of water."
This for sure is no joke. Quantum computing is here. What was science fiction two decades back is a reality today and is the future of computing. The history of computer technology has involved a sequence of changes from one type of physical realization to another --- from gears to relays to valves to transistors to integrated circuits and so on. Quantum computing is the next logical advancement.

Today's advanced lithographic techniques can squeeze logic gates and wires a fraction of a micron wide onto the surface of silicon chips. Soon they will yield even smaller parts and inevitably reach a point where logic gates are so small that they are made out of only a handful of atoms. On the atomic scale matter obeys the rules of quantum mechanics, which are quite different from the classical rules that determine the properties of conventional logic gates. So if computers are to become smaller in the future, new quantum technology must replace or supplement what we have now.

Quantum technology can offer much more than cramming more and more bits onto silicon and multiplying the clock speed of microprocessors. It can support an entirely new kind of computation with qualitatively new algorithms based on quantum principles!

Definition

This seminar gives some basic concepts about smart cards. The physical and logical structure of the smart card and the corresponding security access control are discussed. It is believed that smart cards offer more security and confidentiality than other kinds of information or transaction storage. Moreover, applications built on smart card technologies are illustrated, demonstrating that the smart card is one of the best solutions for providing and enhancing systems with security and integrity.

The seminar also covers the contactless type of smart card briefly. Different kinds of schemes to organise and access multiple-application smart cards are discussed. The first and second schemes are practical and workable today, and there are real applications developed using those models. For the third one, multiple independent applications in a single card, there is still a long way to go to make it feasible, for several reasons.

At the end of the paper, an overview of attack techniques on the smart card is discussed as well. The existence of these attacks does not mean that the smart card is insecure. It is important to realise that attacks against any secure system are nothing new or unique. Any system or technology claiming to be 100% secure is making an irresponsible claim. The main consideration in determining whether a system is secure or not is whether its level of security can meet the requirements of the system.

The smart card is one of the latest additions to the world of information technology. Similar in size to today's plastic payment card, the smart card has a microprocessor or memory chip embedded in it that, when coupled with a reader, has the processing power to serve many different applications. As an access-control device, smart cards make personal and business data available only to the appropriate users. Another application provides users with the ability to make a purchase or exchange value. Smart cards provide data portability, security and convenience. Smart cards come in two varieties: memory and microprocessor.

Memory cards simply store data and can be viewed as a small floppy disk with optional security. A microprocessor card, on the other hand, can add, delete and manipulate information in its memory. Similar to a miniature computer, a microprocessor card has an input/output port, an operating system and non-volatile storage, with built-in security features. On a fundamental level, microprocessor cards are similar to desktop computers: they have operating systems, they store data and applications, they compute and process information, and they can be protected with sophisticated security tools. The self-containment of the smart card makes it resistant to attack, as it does not need to depend on potentially vulnerable external resources. Because of this characteristic, smart cards are often used in applications that require strong security protection and authentication.
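As a hedged sketch of how a reader talks to a microprocessor card, the C fragment below builds a command APDU in the ISO 7816-4 layout (CLA, INS, P1, P2, optional data, expected response length) that such cards commonly use. The SELECT-by-AID values are illustrative only, and the type and helper names (apdu_t, apdu_serialize) are our own, not part of any card library.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* A command APDU in the ISO 7816-4 layout (illustrative field sizes). */
    typedef struct {
        uint8_t cla, ins, p1, p2;   /* class, instruction, two parameters */
        uint8_t lc;                 /* length of the command data field   */
        uint8_t data[255];          /* command data sent to the card      */
        uint8_t le;                 /* expected length of the response    */
    } apdu_t;

    /* Flatten the APDU into the byte string that goes to the card reader. */
    static size_t apdu_serialize(const apdu_t *a, uint8_t *buf) {
        size_t n = 0;
        buf[n++] = a->cla; buf[n++] = a->ins; buf[n++] = a->p1; buf[n++] = a->p2;
        if (a->lc) { buf[n++] = a->lc; memcpy(buf + n, a->data, a->lc); n += a->lc; }
        buf[n++] = a->le;
        return n;
    }

    int main(void) {
        /* A SELECT-by-AID command, typically the first thing a reader sends;
         * the application identifier bytes below are made up. */
        apdu_t sel = { .cla = 0x00, .ins = 0xA4, .p1 = 0x04, .p2 = 0x00,
                       .lc = 5, .data = {0xA0, 0x00, 0x00, 0x00, 0x03}, .le = 0x00 };
        uint8_t wire[262];
        size_t len = apdu_serialize(&sel, wire);
        for (size_t i = 0; i < len; i++) printf("%02X ", wire[i]);
        printf("\n");
        return 0;
    }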

Definition

Have you ever thought about a buffer overflow attack? It occurs through careless programming and the patchy nature of programs. Many C programs have buffer overflow vulnerabilities because the C language lacks array bounds checking, and the culture of C programmers encourages a performance-oriented style that avoids error checking where possible; gets() and strcpy(), for example, perform no bounds checking. This paper presents a systematic solution to the persistent problem of buffer overflow attacks. The buffer overflow attack gained notoriety in 1988 as part of the Morris worm incident on the Internet. These problems are probably the result of careless programming, and could be corrected by elementary testing or code reviews along the way.
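As a minimal illustration of the vulnerable pattern the paper targets, the sketch below copies user input into a fixed-size stack buffer with strcpy(), exactly the unbounded call mentioned above, alongside a bounded variant. The function names are ours and the example is deliberately simplified.

    #include <stdio.h>
    #include <string.h>

    /* Classic overflow: strcpy() copies the caller's string into a 16-byte
     * stack buffer with no bounds check.  Input longer than 15 characters
     * overruns the buffer and can overwrite the saved return address. */
    void vulnerable(const char *input) {
        char buf[16];
        strcpy(buf, input);                    /* no bounds checking */
        printf("copied: %s\n", buf);
    }

    /* A bounded version of the same copy avoids the overflow. */
    void safer(const char *input) {
        char buf[16];
        strncpy(buf, input, sizeof(buf) - 1);  /* copy at most 15 characters */
        buf[sizeof(buf) - 1] = '\0';           /* and always terminate       */
        printf("copied: %s\n", buf);
    }

    int main(int argc, char **argv) {
        vulnerable("short and safe");          /* fine only because the input is short  */
        if (argc > 1) safer(argv[1]);          /* arbitrary input goes to the safe copy */
        return 0;
    }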

The Attack :- A malicious user finds a vulnerability in a highly privileged program, and someone else then implements a patch against that particular attack on that privileged program. Fixes to buffer overflow attacks attempt to solve the problem at the source (the vulnerable program) instead of at the destination (the stack that is being overflowed).

StackGuard :- StackGuard is a simple compiler extension that limits the amount of damage a buffer overflow attack can inflict on a program. The paper discusses the various intricacies of the problem and the implementation details of the compiler extension StackGuard.

Stack Smashing Attack :- Buffer overflow attacks exploit a lack of bounds checking on the size of input being stored in a buffer array. The most common data structure corrupted in this fashion is the stack, and such an attack is called a "stack smashing attack".

StackGuard for Network Access :- The paper also discusses the impact of the buffer overflow attack, and of StackGuard, on network access.

StackGuard prevents changes to active return addresses by either :-
1. detecting the change of the return address before the function returns, or
2. completely preventing the write to the return address.
MemGuard is a tool developed to help debug optimistic specializations by locating code statements that change quasi-invariant values; in StackGuard it provides the second, write-prevention mechanism. A conceptual sketch of the first, detection-based approach follows.
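The fragment below illustrates the detection idea by hand: a guard word (a "canary") is placed among the locals and checked before the function returns, so an overflow that reaches the return address is likely to have clobbered the canary first. This is only a conceptual sketch; real StackGuard inserts a randomly chosen canary next to the saved return address at compile time, and the exact stack layout below depends on the compiler.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CANARY 0xDEADBEEFUL   /* stand-in for the random value a compiler would pick */

    void guarded_copy(const char *input) {
        volatile unsigned long canary = CANARY;  /* guard word placed among the locals */
        char buf[16];
        strcpy(buf, input);                      /* deliberately unbounded, as in the attack */
        if (canary != CANARY) {                  /* check the guard before returning */
            fprintf(stderr, "stack smashing detected, aborting\n");
            abort();
        }
        printf("copied: %s\n", buf);
    }

    int main(void) {
        guarded_copy("hello");                   /* short input: the canary survives */
        return 0;
    }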

STACKGUARD OVERHEAD
1. Canary StackGuard overhead
2. MemGuard StackGuard overhead
3. StackGuard macrobenchmarks

The paper presents these issues and their implications for IT applications, and discusses the solutions through the implementation details of StackGuard.

Definition

The field of surgery is entering a time of great change, spurred on by remarkable recent advances in surgical and computer technology. Computer-controlled diagnostic instruments have been used in the operating room for years to help provide vital information through ultrasound, computer-aided tomography (CAT), and other imaging technologies. Only recently have robotic systems made their way into the operating room as dexterity-enhancing surgical assistants and surgical planners, in answer to surgeons' demands for ways to overcome the surgical limitations of minimally invasive laparoscopic surgery.

The robotic surgical system enables surgeons to remove gallbladders and perform other general surgical procedures while seated at a computer console and 3-D video imaging system across the room from the patient. The surgeons operate controls with their hands and fingers to direct a robotically controlled laparoscope. At the end of the laparoscope are advanced, articulating surgical instruments and miniature cameras that allow surgeons to peer into the body and perform the procedures.

Now imagine: an army ranger is riddled with shrapnel deep behind enemy lines. Diagnostics from wearable sensors signal a physician at a nearby mobile army surgical hospital that his services are needed urgently. The ranger is loaded into an armored vehicle outfitted with a robotic surgery system. Within minutes, he is undergoing surgery performed by the physician, who is seated at a control console 100 kilometers out of harm's way.

The patient is saved. This is the power that the amalgamation of technology and the surgical sciences is offering doctors.
Just as computers revolutionized the latter half of the 20th century, the field of robotics has the potential to equally alter how we live in the 21st century. We've already seen how robots have changed the manufacturing of cars and other consumer goods by streamlining and speeding up the assembly line.

We even have robotic lawn mowers and robotic pets now. And robots have enabled us to see places that humans are not yet able to visit, such as other planets and the depths of the ocean. In the coming decades, we will see robots with artificial intelligence that come to resemble the humans who create them. They may eventually become self-aware and conscious, and be able to do anything that a human can. When we talk about robots doing the tasks of humans, we often talk about the future, but the future of robotic surgery is already here.

Definition:



Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available [1].

Passive vision attempts to analyze the structure of the scene under ambient light [1]. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed.
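As a hedged sketch of that last step, the fragment below recovers the depth of a single matched point in a rectified two-camera setup using the standard triangulation relation Z = f * B / d, where B is the baseline between the cameras, f the focal length and d the disparity between the matched image coordinates. All the numbers are made-up illustrative values.

    #include <stdio.h>

    int main(void) {
        double f  = 0.008;     /* focal length: 8 mm, expressed in metres          */
        double B  = 0.12;      /* baseline between the two cameras: 12 cm          */
        double xl = 0.00110;   /* image coordinate of the point in the left view   */
        double xr = 0.00050;   /* matching image coordinate in the right view      */

        double d = xl - xr;    /* disparity between the two matched points         */
        double Z = f * B / d;  /* depth of the scene point in front of the cameras */

        printf("disparity = %.5f m, depth Z = %.2f m\n", d, Z);
        return 0;
    }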

Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of active vision techniques. One digital 3D imaging system based on optical triangulation was developed and demonstrated.

AUTOSYNCHRONIZED SCANNER

The auto-synchronized scanner, depicted schematically in Figure 1, can provide registered range and colour data of visible surfaces. A 3D surface map is captured by scanning a laser spot over a scene, collecting the reflected laser light, and finally focusing the beam onto a linear laser spot sensor. Geometric and photometric corrections of the raw data give two images in perfect registration: one with x, y, z co-ordinates and a second with reflectance data. A laser beam composed of multiple visible wavelengths is used to measure the colour map of the scene.

Definition

Light-emitting polymers, or polymer-based light-emitting diodes, discovered by Friend et al. in 1990, have been found superior to other displays such as liquid crystal displays (LCDs), vacuum fluorescence displays and electroluminescence displays. Though not yet commercialised, they have proved to be a milestone in the field of flat panel displays. Research on LEP is under way at Cambridge Display Technology Ltd (CDT) in the UK.

In the last decade, several other display contenders, such as plasma and field emission displays, were hailed as the solution to the pervasive display. Like the LCD, they suited certain niche applications but failed to meet the broad demands of the computer industry.

Today the trend is towards non-CRT flat panel displays. As LEDs are inexpensive devices, they can be extremely handy in constructing flat panel displays. The idea was to combine the characteristics of a CRT with the performance of an LCD and the added design benefits of formability and low power. Cambridge Display Technology Ltd is developing a display medium with exactly these characteristics.

The technology uses a light-emitting polymer (LEP) that costs much less to manufacture and run than CRTs because the active material used is plastic.

LEP is a polymer that emits light when a voltage is applied to it. The structure comprises a thin-film semiconducting polymer sandwiched between two electrodes, an anode and a cathode. When electrons and holes are injected from the electrodes, these charge carriers recombine, leading to the emission of light that escapes through the glass substrate.

Definition

We all have our favorite radio stations that we preset into our car radios, flipping between them as we drive to and from work, on errands and around town. But when you travel too far away from the source station, the signal breaks up and fades into static. Most radio signals can only travel about 30 or 40 miles from their source. On long trips that find you passing through different cities, you might have to change radio stations every hour or so as the signals fade in and out.

Now, imagine a radio station that can broadcast its signal from more than 22,000 miles (35,000 km) away and then come through on your car radio with complete clarity, without your ever having to change the station.

Satellite Radio, or Digital Audio Radio Service (DARS), is a subscriber-based radio service that is broadcast directly from satellites. Subscribers will be able to receive up to 100 radio channels featuring compact disc quality music, news, weather, sports, talk radio and other entertainment channels.

Satellite radio is an idea nearly 10 years in the making. In 1992, the U.S. Federal Communications Commission (FCC) allocated spectrum in the "S" band (2.3 GHz) for nationwide broadcasting of satellite-based Digital Audio Radio Service (DARS). In 1997, the FCC awarded eight-year radio broadcast licenses to two companies, Sirius Satellite Radio (formerly CD Radio) and XM Satellite Radio (formerly American Mobile Radio). Both companies have been working aggressively to be ready to offer their radio services to the public by the end of 2000. It is expected that automotive radios will be the largest application of satellite radio.

The satellite era began in September 2001, when XM launched in selected markets, followed by full nationwide service in November. Sirius lagged slightly, with a gradual rollout beginning in February, including a quiet launch in the Bay Area on June 15; the nationwide launch came on July 1.

Definition

Emission computed tomography (ECT) is a technique whereby multiple cross-sectional images of tissue function can be produced, thus removing the effect of overlying and underlying activity. ECT is generally considered as two separate modalities. Single photon emission computed tomography involves the use of a single gamma ray emitted per nuclear disintegration. Positron emission tomography makes use of radioisotopes such as gallium-68, in which two gamma rays, each of 511 keV, are emitted simultaneously when a positron from a nuclear disintegration annihilates in tissue.

SPECT, the acronym of single photon emission computed tomography, is a nuclear medicine technique that uses radiopharmaceuticals, a rotating camera and a computer to produce images which allow us to visualize functional information about a patient's specific organ or body system. SPECT images are functional in nature rather than purely anatomical, as in ultrasound, CT and MRI. SPECT, like PET, acquires information on the concentration of radionuclides introduced into the patient's body.

SPECT dates from the early 1960s, when the idea of emission transverse section tomography was introduced by D. E. Kuhl and R. Q. Edwards, prior to PET, X-ray CT or MRI. The first single photon ECT, or SPECT, imaging device was developed by Kuhl and Edwards, who produced tomographic images from emission data in 1963. Many research systems which became clinical standards were also developed in the 1980s.

SPECT is short for single photon emission computed tomography. As its name suggests, single photons of gamma radiation are the source of the information, rather than the X-rays used in a conventional CT scan.

Similar to X-ray CT, MRI, etc., SPECT allows us to visualize functional information about a patient's specific organ or body system.

Internal radiation is administered by means of a pharmaceutical labeled with a radioactive isotope. This radiopharmaceutical decays, resulting in the emission of gamma rays. These gamma rays give us a picture of what is happening inside the patient's body.

These gamma rays are detected using the most essential tool in nuclear medicine, the gamma camera. The gamma camera can be used in planar imaging to acquire a 2-D image, or in SPECT imaging to acquire a 3-D image.
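As a rough sketch of how tomographic images come out of such projection data, the toy program below performs a simple, unfiltered backprojection of a made-up sinogram onto an image grid. Real SPECT reconstruction adds filtering (filtered backprojection) or iterative methods plus attenuation correction; every number here is illustrative.

    #include <math.h>
    #include <stdio.h>

    #define N    32          /* reconstructed image is N x N pixels          */
    #define NANG 16          /* number of projection angles in the sinogram  */

    int main(void) {
        static double sino[NANG][N];  /* counts per angle and detector bin (made up) */
        static double img[N][N];      /* reconstructed image, starts at zero         */
        const double PI = 3.14159265358979;

        /* Fake projection data: one bright central bin at every angle, as if a
         * point source sat at the centre of the field of view. */
        for (int a = 0; a < NANG; a++) sino[a][N / 2] = 1.0;

        /* Smear each projection bin back across the image along its line. */
        for (int a = 0; a < NANG; a++) {
            double th = PI * a / NANG, c = cos(th), s = sin(th);
            for (int y = 0; y < N; y++)
                for (int x = 0; x < N; x++) {
                    double t = (x - N / 2) * c + (y - N / 2) * s; /* detector coordinate */
                    int bin = (int)lround(t) + N / 2;
                    if (bin >= 0 && bin < N) img[y][x] += sino[a][bin];
                }
        }
        printf("centre pixel = %.1f, corner pixel = %.1f\n", img[N / 2][N / 2], img[0][0]);
        return 0;
    }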

Definition

With the proliferation of portable electronic devices, power-efficient data transmission has become increasingly important. For serial data transfer, universal asynchronous receiver/transmitter (UART) circuits are often implemented because of their inherent design simplicity and application-specific versatility. Laptop keyboards, Palm Pilot organizers and modems are a few examples of devices that employ UART circuits. In this work, the design and analysis of a robust UART architecture has been carried out to minimize power consumption during both idle and continuous modes of operation.

UART

A UART (universal asynchronous receiver/transmitter) is responsible for performing the main task in serial communications with computers. The device changes incoming parallel information to serial data which can be sent on a communication line. A second UART can be used to receive the information. The UART performs all the tasks (timing, parity checking, etc.) needed for the communication. The only extra devices attached are line driver chips capable of transforming the TTL-level signals to line voltages and vice versa.
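As a sketch of what "changing parallel information to serial data" means at the bit level, the routine below frames one byte the way a typical UART does: a start bit, eight data bits sent least significant bit first, an even parity bit and a stop bit. This frame layout is a common default rather than a description of any particular UART chip, and the helper name frame_byte is ours.

    #include <stdint.h>
    #include <stdio.h>

    /* Build the 11-bit line sequence for one byte: start, 8 data bits (LSB
     * first), even parity, stop.  Returns the number of bits written. */
    static int frame_byte(uint8_t byte, int bits[11]) {
        int n = 0, ones = 0;
        bits[n++] = 0;                    /* start bit: line pulled low          */
        for (int i = 0; i < 8; i++) {     /* data bits, least significant first  */
            int b = (byte >> i) & 1;
            ones += b;
            bits[n++] = b;
        }
        bits[n++] = ones % 2;             /* even parity: total count of 1s even */
        bits[n++] = 1;                    /* stop bit: line returns high         */
        return n;
    }

    int main(void) {
        int line[11];
        int n = frame_byte('A', line);    /* frame the character 0x41 */
        for (int i = 0; i < n; i++) printf("%d", line[i]);
        printf("\n");                     /* 11 bits on the wire for 8 data bits */
        return 0;
    }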

To use the device in different environments, registers are accessible to set or review the communication parameters. Settable parameters include, for example, the communication speed, the type of parity check, and the way incoming information is signalled to the running software.

UART types

Serial communication on PC compatibles started with the 8250 UART in the XT. In the years that followed, new family members were introduced, such as the 8250A and 8250B revisions and the 16450. The last was first implemented in the AT, because the higher bus speed of that computer could not be handled by the 8250 series. The differences between these first UART series were rather minor; the most important property changed with each new release was the maximum allowed speed at the processor bus side.

The 16450 was capable of handling a communication speed of 38.4 kbps without problems. The demand for higher speeds led to the development of newer series which would be able to relieve the main processor of some of its tasks. The main problem with the original series was the need to perform a software action for each single byte transmitted or received. To overcome this problem, the 16550 was released, which contained two on-board FIFO buffers, each capable of storing 16 bytes: one buffer for incoming bytes and one for outgoing bytes.


Definition

Computer chips of today are synchronous: they contain a main clock which controls the timing of the entire chip. There are problems, however, with the clocked designs that are common today. One problem is speed. A chip can only work as fast as its slowest component, so if one part of the chip is especially slow, the other parts are forced to sit idle. This wasted compute time is obviously detrimental to the speed of the chip.

New problems with speeding up a clocked chip are just around the corner. Clock frequencies are getting so fast that signals can barely cross the chip in one clock cycle. When we get to the point where the clock cannot drive the entire chip, we'll be forced to come up with a solution. One possible solution is a second clock, but this will incur overhead and power consumption, so this is a poor solution. It is also important to note that doubling the frequency of the clock does not double the chip speed, therefore blindly trying to increase chip speed by increasing frequency without considering other options is foolish.

The other major problem with clocked design is power consumption. The clock consumes more power than any other component of the chip. The most disturbing thing about this is that the clock serves no direct computational use: a clock does not perform operations on information; it simply orchestrates the computational parts of the computer.

New problems with power consumption are arising. As the number of transistors on a chip increases, so does the power used by the clock. Therefore, as we design more complicated chips, power consumption becomes an even more crucial topic. Mobile electronics are the target for many chips.

These chips need to be even more conservative with power consumption in order to have a reasonable battery lifetime. The natural solution to the above problems, as you may have guessed, is to eliminate the source of these headaches: the clock.

Definition

According to the dictionary, guidance is the 'process of guiding the path of an object towards a given point, which in general may be moving'. The process of guidance is based on the position and velocity of the target relative to the guided object. Present-day ballistic missiles are all guided using the Global Positioning System, or GPS. GPS uses satellites as instruments for sending signals to the missile during flight and guiding it to the target.

SATRACK is a system that was developed to provide an evaluation methodology for the guidance systems of ballistic missiles. It was developed as a comprehensive test and evaluation program to validate the integrated weapon system design for ballistic missiles launched from nuclear-powered submarines, and it is based on the tracking signals received at the missile from the GPS satellites. SATRACK has the ability to receive, record, rebroadcast and track the satellite signals.

The SATRACK facility also has the great advantage that all the data obtained from the test flights can be used to build a guidance error model. The recorded data, along with the simulation data from the models, can produce a comprehensive guidance error model. This results in a solution giving the best flight path for the missile.

The signals for GPS satellite navigation are two L-band frequency signals, called L1 and L2. L1 is at 1575.42 MHz and L2 at 1227.60 MHz. The modulations used for these GPS signals are:

1. A narrowband clear/acquisition (C/A) code with 2 MHz bandwidth.
2. A wideband encrypted P code with 20 MHz bandwidth.

L1 is modulated using the narrowband C/A code only. This signal gives an accuracy of only about 100 m. L2 is modulated using the P code, which gives a higher accuracy, close to 10 m; that is why it is encrypted. The parameters a GPS signal carries are latitude, longitude, altitude and time. The modulations applied to each frequency provide the basis for the epoch measurements used to determine the distance to each satellite. Tracking the dual-frequency GPS signals provides a way to correct the measurements for the effect of refraction through the ionosphere. An alternate frequency, L3 at 1381.05 MHz, was also used to compensate for ionospheric effects.
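A brief sketch of why tracking both frequencies helps: the first-order ionospheric delay scales as 1/f^2, so pseudoranges measured on L1 and L2 can be combined into the standard ionosphere-free combination, in which that delay cancels. Only the L1 and L2 frequencies below come from the text; the pseudorange values are made up for illustration.

    #include <stdio.h>

    int main(void) {
        const double f1 = 1575.42e6;   /* L1 carrier frequency, Hz */
        const double f2 = 1227.60e6;   /* L2 carrier frequency, Hz */

        double P1 = 22000105.3;        /* illustrative pseudorange measured on L1, metres */
        double P2 = 22000112.9;        /* illustrative pseudorange measured on L2, metres */

        /* Ionosphere-free combination: (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2). */
        double g1 = f1 * f1, g2 = f2 * f2;
        double P_iono_free = (g1 * P1 - g2 * P2) / (g1 - g2);

        printf("ionosphere-free pseudorange: %.2f m\n", P_iono_free);
        return 0;
    }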

Definition

The World Wide Web's current implementation is designed predominantly for information retrieval and display in a human-readable form. Its data formats and protocols are neither intended nor suitable for machine-to-machine interaction without humans in the loop. Emergent Internet uses, including peer-to-peer and grid computing, provide both a glimpse of and an impetus for evolving the Internet into a distributed computing platform.

What would be needed to make the Internet into an application-hosting platform? This would be a networked, distributed counterpart of the hosting environment that a traditional operating system provides to applications on a single node. Creating this platform requires an additional functional layer on the Internet that can allocate and manage the resources necessary for application execution.

Given such a hosting environment, software designers could create network applications without having to know at design time the type or number of nodes the application will execute on. With proper support, the system could allocate and bind software components to the resources they require at runtime, based on resource requirements, availability, connectivity and system state at the actual time of execution. In contrast, early bindings tend to result in static allocations that cannot adapt well to variations in resources, load and availability, so the software components tend to be less efficient and have difficulty recovering from failures.

The foundation of the proposed approach is to disaggregate and virtualize system resources as services that can be described, discovered and dynamically configured at runtime to execute an application. Such a system can be built as a combination and extension of Web services, peer-to-peer computing and grid computing standards and technologies. It thus follows the successful Internet model of adding minimal and relatively simple functional layers atop already available technologies to meet requirements.

This approach does not, however, advocate an "Internet OS" that would provide some form of uniform or centralized global resource management. Several theoretical and practical reasons make such an approach undesirable, including its inability to scale and the need to provide and manage supporting software on every participating platform. Instead, we advocate a mechanism that supports spontaneous, dynamic and voluntary collaboration among entities with their contributing resources.
