Technology

Present software technologies

Hologram Basics:





There are a few basic things to learn about holograms.
First, I will briefly explain the two theories behind the hologram: 1) the wave interference pattern, and 2) the coherency of light.

Then, I will describe how holograms store and project information.
1. Interference Pattern:

    The hologram is based upon Nobel Prize winner Dennis Gabor's theory concerning interference patterns. Gabor theorized in 1947 that each crest of the wave pattern contains the whole information of its original source, and that this information could be stored on film and reproduced. This is why it is called a hologram (from the Greek "holos", meaning whole).

    A pebble dropped in a still pond is the most basic example used to describe the wave interference process. If you drop a pebble into a pond, it creates an infinitely expanding circular wave pattern. If you drop two pebbles into a pond, the waves' crests will eventually meet. The intersecting points of the waves' crests are called the points of interference. The interference of two or more waves carries the whole information about all the waves.
2. Coherent Light:

    Gabor recorded several images holographically, but wasn't successful at producing a clear image because he could only use forms of incoherent, white light. An example of incoherent light would be if you were watching cars coming out of a tunnel: you would likely see many different models and types of cars, traveling at different speeds and at different distances apart. Now, suppose you started seeing the same model and type of car, all heading down the highway at the same speed, and the same distance apart. This would be an example of coherent light. Holograms need coherent light to record or play back the image clearly.
    The LASER (Light Amplification by Stimulated Emission of Radiation) was invented to produce coherent light. Incoherent light travels at different frequencies and in different phases; coherent light travels at the same frequency and in the same phase (100% coherent light is rare). It is important to use coherent light because the information is carried on the crest of each wave: the more points of intersection, the more information.
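As a rough numerical illustration of these two ideas (my own sketch, not part of the original description), the following code sums the fields of two point sources on a "screen": with a fixed relative phase the crests produce stable bright and dark fringes, while a randomly jittering phase washes the pattern out.

```python
# Rough numerical illustration (not from the original text) of why coherence
# matters: two sources emitting at the SAME frequency with a fixed relative
# phase produce a stable interference pattern; random phases wash it out.
# Wavelength and geometry are arbitrary choices.
import numpy as np

wavelength = 1.0                 # arbitrary units
k = 2 * np.pi / wavelength       # wave number
src1, src2 = np.array([0.0, 0.0]), np.array([5.0, 0.0])   # the two "pebbles"
xs = np.linspace(-5, 10, 60)     # observation points on a screen at y = 20
screen = np.stack([xs, np.full_like(xs, 20.0)], axis=1)

def intensity(phase2):
    """Intensity of the superposed waves on the screen for a given relative phase."""
    r1 = np.linalg.norm(screen - src1, axis=1)
    r2 = np.linalg.norm(screen - src2, axis=1)
    field = np.exp(1j * k * r1) + np.exp(1j * (k * r2 + phase2))
    return np.abs(field) ** 2    # crests reinforce (bright) or cancel (dark)

coherent = intensity(0.0)                      # fixed relative phase
incoherent = np.mean([intensity(p) for p in    # phase jitters randomly, so the
                      np.random.uniform(0, 2 * np.pi, 200)], axis=0)  # fringes average out

print("coherent fringes:  ", "".join("#" if v > 3 else "." for v in coherent))
print("incoherent average:", "".join("#" if v > 3 else "." for v in incoherent))
```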
3. Storing Information:

    Unlike a camera, which has only one point of light reference, a hologram has two or more reference points of light. The intersection points of the two light waves contain the whole information of both reference points. A LASER is used as the light source so the waves are coherent.





    A LASER is projected onto a partially silvered mirror called a beam splitter. This mirror splits the original beam into two beams. One beam travels through a lens that diffuses the light onto the object being recorded. This light, called the object beam, is reflected off the object onto the film plate. The second beam is bounced off a mirror and then through a lens that diffuses the light directly onto the film. This beam is called the reference beam. The same light source needs to be used for both beams so the waves will have perfect intersection points.



             

    To add motion (time) to your hologram, you would turn the object, or move the mirrors and lenses, and shoot again onto the same film. The original waves recorded on the film will intersect with the waves from the new perspective.


                                       

TCP/IP is a set of protocols developed to allow cooperating computers to share resources across a network. It was developed by a community of researchers centered around the ARPAnet. Certainly the ARPAnet is the best-known TCP/IP network. However, as of June 1987, at least 130 different vendors had products that support TCP/IP, and thousands of networks of all kinds use it.

The most accurate name for the set of protocols we are describing is the "Internet protocol suite". TCP and IP are two of the protocols in this suite. (They will be described below.) Because TCP and IP are the best known of the protocols, it has become common to use the term TCP/IP or IP/TCP to refer to the whole family. It is probably not worth fighting this habit. However this can lead to some oddities. For example, I find myself talking about NFS as being based on TCP/IP, even though it doesn't use TCP at all. (It does use IP. But it uses an alternative protocol, UDP, instead of TCP. All of this alphabet soup will be unscrambled in the following pages.)
The Internet is a collection of networks, including the ARPAnet, NSFnet, regional networks such as NYSERNet, local networks at a number of university and research institutions, and a number of military networks. The term "Internet" applies to this entire set of networks.

TCP/IP is a layered set of protocols. In order to understand what this means, it is useful to look at an example. A typical situation is sending mail. First, there is a protocol for mail. This defines a set of commands which one machine sends to another, e.g. commands to specify who the sender of the message is, who it is being sent to, and then the text of the message. However this protocol assumes that there is a way to communicate reliably between the two computers.

Mail, like other application protocols, simply defines a set of commands and messages to be sent. It is designed to be used together with TCP and IP. TCP is responsible for making sure that the commands get through to the other end. It keeps track of what is sent, and retransmits anything that did not get through. If any message is too large for one datagram, e.g. the text of the mail, TCP will split it up into several datagrams and make sure that they all arrive correctly. Since these functions are needed for many applications, they are put together into a separate protocol, rather than being part of the specifications for sending mail.
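As a toy illustration of this splitting-and-reassembly idea (my own sketch, not the real TCP state machine), the following code breaks a message into numbered segments, loses some of them in transit, retransmits until everything is acknowledged, and reassembles the message in order.

```python
# Toy illustration (not the real TCP implementation) of the idea described
# above: a message too large for one datagram is split into numbered segments,
# the receiver acknowledges what arrived, and the sender retransmits the rest.
import random

MSS = 8  # hypothetical maximum segment size, in bytes

def segment(message: bytes):
    """Split a message into {sequence_number: payload} pieces."""
    return {seq: message[i:i + MSS]
            for seq, i in enumerate(range(0, len(message), MSS))}

def unreliable_send(segments, loss_rate=0.3):
    """Deliver a random subset of segments, as a lossy IP layer might."""
    return {seq: data for seq, data in segments.items()
            if random.random() > loss_rate}

def transfer(message: bytes) -> bytes:
    outstanding = segment(message)      # everything still unacknowledged
    received = {}
    while outstanding:                  # keep retransmitting until all acked
        delivered = unreliable_send(outstanding)
        received.update(delivered)
        for seq in delivered:           # "ACKs" remove segments from the queue
            outstanding.pop(seq, None)
    # Reassemble in sequence-number order, exactly once per segment.
    return b"".join(received[seq] for seq in sorted(received))

msg = b"However this protocol assumes a way to communicate reliably."
assert transfer(msg) == msg
print("message reassembled correctly")
```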



Definition:
A fourth generation (4G) wireless system is a packet-switched wireless system with wide area coverage and high throughput. It is designed to be cost effective and to provide high spectral efficiency. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB) radio, and millimeter-wave wireless. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The high performance is achieved by the use of long-term channel prediction, in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. The frequency band is 2-8 GHz, and the system gives the ability for worldwide roaming, with access to a cell anywhere.

Wireless mobile communications systems are uniquely identified by "generation" designations. Introduced in the early 1980s, first generation (1G) systems were marked by analog frequency modulation and used primarily for voice communications. Second generation (2G) wireless communications systems, which made their appearance in the late 1980s, were also used mainly for voice transmission and reception. The wireless system in widespread use today goes by the name of 2.5G, an "in between" service that serves as a stepping stone to 3G. Whereas 2G communications is generally associated with Global System for Mobile (GSM) service, 2.5G is usually identified as being "fueled" by General Packet Radio Service (GPRS) along with GSM. 3G systems, making their appearance in late 2002 and in 2003, are designed for voice and paging services, as well as interactive media use such as teleconferencing, Internet access, and other services. The problem with 3G wireless systems is bandwidth: these systems provide only WAN coverage ranging from 144 kbps (for vehicle mobility applications) to 2 Mbps (for indoor static applications). Segue to 4G, the "next dimension" of wireless communication. 4G wireless uses OFDM, UWB, millimeter-wave wireless, and smart antennas. A data rate of 20 Mbps is employed, mobile speeds of up to 200 km/h are supported, and the frequency band is 2-8 GHz, giving the ability for worldwide roaming with access to a cell anywhere.
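Since OFDM is central to the 4G air interface described above, here is a minimal sketch of the idea (my own illustration, with arbitrary parameters rather than real 4G numbers): data symbols are placed on orthogonal subcarriers with an IFFT, a cyclic prefix is added, and the receiver undoes both with an FFT.

```python
# Minimal OFDM sketch (illustrative only; parameters are arbitrary, not 4G specs).
# Data symbols are mapped onto orthogonal subcarriers with an IFFT, a cyclic
# prefix is prepended, and the receiver strips the prefix and applies an FFT.
import numpy as np

N_SUBCARRIERS = 64        # number of orthogonal subcarriers
CYCLIC_PREFIX = 16        # samples copied from the tail to absorb multipath

def qpsk(bits):
    """Map pairs of bits to QPSK constellation points."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def ofdm_modulate(symbols):
    """One OFDM symbol: IFFT across subcarriers plus a cyclic prefix."""
    time_signal = np.fft.ifft(symbols)
    return np.concatenate([time_signal[-CYCLIC_PREFIX:], time_signal])

def ofdm_demodulate(samples):
    """Strip the cyclic prefix and FFT back to the subcarrier symbols."""
    return np.fft.fft(samples[CYCLIC_PREFIX:])

bits = np.random.randint(0, 2, 2 * N_SUBCARRIERS)
tx_symbols = qpsk(bits)
rx_symbols = ofdm_demodulate(ofdm_modulate(tx_symbols))
print("recovered subcarrier symbols match:", np.allclose(tx_symbols, rx_symbols))
```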

Features:
o Support for interactive multimedia, voice, streaming video, Internet, and other broadband services
o IP based mobile system
o High speed, high capacity, and low cost per bit
o Global access, service portability, and scalable mobile services
o Seamless switching, and a variety of Quality of Service driven services
o Better scheduling and call admission control techniques
o Ad hoc and multi hop networks (the strict delay requirements of voice make multi hop network service a difficult problem)
o Better spectral efficiency
o Seamless network of multiple protocols and air interfaces (since 4G will be all-IP, look for 4G systems to be compatible with all common network technologies, including 802.11, WCDMA, Bluetooth, and HiperLAN).
o An infrastructure to handle pre-existing 3G systems along with other wireless technologies, some of which are being developed concurrently.


The two most important phenomena impacting telecommunications over the past decade have been the explosive parallel growth of both the internet and mobile telephone services. The internet brought the benefits of data communications to the masses with email, the web, and e-commerce, while mobile service has enabled "follow-me anywhere/always on" telephony. The internet helped accelerate the trend from voice-centric to data-centric networking. Data already exceeds voice traffic and the data share continues to grow. Now these two worlds are converging. This convergence offers the benefits of new interactive multimedia services coupled to the flexibility and mobility of wireless. To realize the full potential of this convergence, however, we need broadband access connections.

Here we compare and contrast two technologies that are likely to play important roles: Third Generation mobile ("3G") and Wireless Local Area Networks ("WLAN"). The former represents a natural evolution and extension of the business models of existing mobile providers. In contrast, the WiFi approach would leverage the large installed base of WLAN infrastructure already in place. We use 3G and WiFi as shorthand for the broad classes of related technologies that have two quite distinct industry origins and histories.

Speaking broadly, 3G offers a vertically-integrated, top-down, service-provider approach to delivering wireless internet access, while WiFi offers an end-user-centric, decentralized approach to service provisioning. We use these two technologies to focus our speculations on the potential tensions between these two alternative world views. The wireless future will include a mix of heterogeneous wireless access technologies. Moreover, we expect that the two world views will converge, such that vertically-integrated service providers will integrate WiFi or other WLAN technologies into their 3G or wireline infrastructure when this makes sense. The multiplicity of potential wireless access technologies and/or business models provides some hope that we may be able to realize robust facilities-based competition for broadband local access services. If this occurs, it would help solve the "last mile" competition problem that has bedeviled telecommunications policy.

SOME BACKGROUND ON WiFi AND 3G

3G:
3G is a technology for mobile service providers. Mobile services are provided by service providers that own and operate their own wireless networks and sell mobile services to end-users. Mobile service providers use licensed spectrum to provide wireless telephone coverage over some relatively large, contiguous geographic service area, which today may include an entire country. From a user's perspective, the key feature of mobile service is that it offers ubiquitous and continuous coverage. To support the service, mobile operators maintain a network of interconnected and overlapping mobile base stations that hand off customers as those customers move among adjacent cells. Each mobile base station may support users up to several kilometers away. The cell towers are connected to each other by a backhaul network that also provides interconnection to the wireline Public Switched Telephone Network (PSTN) and other services. The mobile system operator owns the end-to-end network from the base stations to the backhaul networks to the point of interconnection to the PSTN. Third Generation (3G) mobile technologies will support higher bandwidth digital communications. To expand the range and capability of data services that can be supported by digital mobile systems, service providers will have to upgrade their networks to one of the 3G technologies, which can support data rates from 384 Kbps up to 2 Mbps.

WiFi
WiFi is the popular name for the wireless Ethernet 802.11b standard for WLANs. WiFi allows collections of PCs, terminals, and other distributed computing devices to share resources and peripherals such as printers, access servers, etc. Ethernet is one of the most popular LAN technologies.

HOW ARE WiFi AND 3G THE SAME?
From the preceding discussion, it might appear that 3G and WiFi address completely different user needs in quite distinct markets that do not overlap. While this was certainly more true of earlier generations of mobile services when compared with wired LANs or earlier versions of WLANs, it is increasingly not the case. The end-user does not care what technology is used to support his service. What matters is that both of these technologies are providing platforms for wireless access to the internet and other communication services.

Mobile positioning technology has become an important area of research, for emergency as well as for commercial services. Mobile positioning in cellular networks will provide several services, such as locating stolen mobiles, emergency calls, different billing tariffs depending on where the call is originated, and methods to predict the user's movement inside a region. The evolution to location-dependent services and applications in wireless systems continues to require the development of more accurate and reliable mobile positioning technologies. The major challenge to accurate location estimation is in creating techniques that yield acceptable performance when the direct path from the transmitter to the receiver is intermittently blocked. This is the Non-Line-Of-Sight (NLOS) problem, and it is known to be a major source of error since it systematically causes the mobile to appear farther away from the base station (BS) than it actually is, thereby increasing the positioning error.

NEED FOR MOBILE TRACKING

Recent demands from new applications require positioning capabilities in mobile telephones or other devices. The ability to obtain the geo-location of the Mobile Telephone (MT) in the cellular system allows the network operators to offer new services to mobile users. The most immediate motivation for the cellular system to provide MT position is enhanced emergency services in the event of accidents. The positioning of the mobile user could provide services like

o Emergency service for subscriber safety.
o Location sensitive billing.
o Cellular fraud detection.
o Intelligent transport system services.
o Efficient and effective network performance and management.

Location Tracking Curve Method
The method we propose tracks the location of a mobile telephone using curves connecting the points where circles intersect one another, the circles' radii being the distances between the BSs and the mobile telephone. The steps involved are:

a. Each base station near the mobile telephone receives a predetermined signal from the mobile telephone and calculates the distance between the mobile telephone and the base station, together with the variance of the signal's arrival time at the base station;
b. A circle is drawn with that distance as its radius and the coordinates of the base station as its center;
c. A pair of base stations (the first and the second) is selected. Several location tracking curves connecting the two intersection points of the circles corresponding to the first and the second base stations are drawn, and one of the curves is selected using the variances of the first and the second base stations;
d. Step c. is repeated for the other pairs of base stations;
e. The intersection points among the location tracking curves selected in steps c. and d. are obtained; and
f. The location of the mobile telephone is determined using the coordinates of the intersection points obtained in step e.

The location tracking curves are arcs of circles whose centers lie nearer to whichever of the first and second base stations has the smaller variance. The circles formed by the location tracking curves have their centers on the line connecting the coordinates of the first and the second base stations. The larger of the two variances is compared to the variances of the several location tracking curves, and one of the curves is selected according to the comparison result. The location coordinates of the mobile telephone are determined by averaging the coordinates of the intersection points obtained in step e.
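As a simplified illustration of the geometry involved (my own sketch, which ignores the variance-weighted curve selection and assumes noise-free distance measurements), the following code intersects the base-station circles pairwise, keeps the intersection points consistent with every measured distance, and averages them.

```python
# Simplified illustration of the geometry behind the method above, assuming
# noise-free distances and ignoring the variance-weighted curve selection
# described in the text. Base-station coordinates here are hypothetical.
import math
from itertools import combinations

def circle_intersections(c1, r1, c2, r2):
    """Return the 0, 1 or 2 intersection points of two circles."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                                   # no usable intersection
    a = (r1**2 - r2**2 + d**2) / (2 * d)            # distance from c1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = h * (y2 - y1) / d, h * (x2 - x1) / d   # offset along the chord
    return [(mx + ox, my - oy), (mx - ox, my + oy)]

def locate(base_stations, distances, tol=1e-6):
    """Average every pairwise intersection point consistent with ALL distances."""
    candidates = []
    for i, j in combinations(range(len(base_stations)), 2):
        for p in circle_intersections(base_stations[i], distances[i],
                                      base_stations[j], distances[j]):
            if all(abs(math.hypot(p[0] - bx, p[1] - by) - r) < tol
                   for (bx, by), r in zip(base_stations, distances)):
                candidates.append(p)
    xs, ys = zip(*candidates)
    return sum(xs) / len(xs), sum(ys) / len(ys)

bs = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]           # three base stations
true_mt = (1.0, 1.0)                                # "unknown" mobile position
dists = [math.hypot(true_mt[0] - x, true_mt[1] - y) for x, y in bs]
print("estimated position:", locate(bs, dists))     # approximately (1.0, 1.0)
```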

Definition
Over the last decade, the growth of satellite service, the rise of digital cable, and the birth of HDTV have all left their mark on the television landscape. Now, a new delivery method threatens to shake things up even more powerfully. Internet Protocol Television (IPTV) has arrived, and backed by the deep pockets of the telecommunications industry, it's poised to offer more interactivity and bring a hefty dose of competition to the business of selling TV.

IPTV describes a system capable of receiving and displaying a video stream encoded as a series of Internet Protocol packets. If you've ever watched a video clip on your computer, you've used an IPTV system in its broadest sense. When most people discuss IPTV, though, they're talking about watching traditional channels on your television, where people demand a smooth, high-resolution, lag-free picture, and it's the Telcos that are jumping headfirst into this market. Once known only as phone companies, the Telcos now want to turn a "triple play" of voice, data, and video that will retire the side and put them securely in the batter's box. In this primer, we'll explain how IPTV works and what the future holds for the technology, though IP can (and will) be used to deliver video over all sorts of networks, including cable systems.
How It Works

First things first: the venerable set-top box, on its way out in the cable world, will make a resurgence in IPTV systems. The box will connect to the home DSL line and is responsible for reassembling the packets into a coherent video stream and then decoding the contents. Your computer could do the same job, but most people still don't have an always-on PC sitting beside the TV, so the box will make a comeback. Where will the box pull its picture from? To answer that question, let's start at the source.

Most video enters the system at the Telco's national head end, where network feeds are pulled from satellites and encoded if necessary (often in MPEG-2, though H.264 and Windows Media are also possibilities). The video stream is broken up into IP packets and dumped into the Telco's core network, which is a massive IP network that handles all sorts of other traffic (data, voice, etc.) in addition to the video. Here the advantages of owning the entire network from stem to stern (as the Telcos do) really come into play, since quality of service (QoS) tools can prioritize the video traffic to prevent delay or fragmentation of the signal. Without control of the network, this would be dicey, since QoS requests are not often recognized between operators. With end-to-end control, the Telcos can guarantee enough bandwidth for their signal at all times, which is key to providing the "just works" reliability consumers have come to expect from their television sets.

The video streams are received by a local office, which has the job of getting them out to the folks on the couch. This office is the place where local content (such as TV stations, advertising, and video on demand) is added to the mix, but it's also the spot where the IPTV middleware is housed. This software stack handles user authentication, channel change requests, billing, VoD requests, etc. - basically, all of the boring but necessary infrastructure.

All the channels in the lineup are multicast from the national head end to local offices at the same time, but at the local office a bottleneck becomes apparent. That bottleneck is the local DSL loop, which has nowhere near the capacity to stream all of the channels at once. Cable systems can do this, since their bandwidth can be in the neighborhood of 4.5 Gbps, but even the newest ADSL2+ technology tops out at around 25 Mbps (and this speed drops quickly as distance from the DSLAM [DSL Access Multiplexer] grows).
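Some back-of-the-envelope arithmetic makes the bottleneck clear; the per-stream bitrates below are rough, assumed figures rather than official specifications.

```python
# Back-of-the-envelope arithmetic for the bottleneck described above.
# The per-stream bitrates are rough, assumed figures, not authoritative specs.
ADSL2_PLUS_DOWNSTREAM_MBPS = 25.0   # best case, shrinking with loop length

assumed_stream_mbps = {
    "MPEG-2 standard definition": 3.5,
    "H.264 standard definition": 2.0,
    "H.264 high definition": 8.0,
}

for codec, mbps in assumed_stream_mbps.items():
    streams = int(ADSL2_PLUS_DOWNSTREAM_MBPS // mbps)
    print(f"{codec:28s} ~{mbps:4.1f} Mbps -> at most {streams} simultaneous streams")

# Hundreds of channels clearly cannot all be pushed down the loop at once,
# which is why IPTV sends only the channels a household is actually watching.
```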

 A computer virus is a self-replicating program containing code that explicitly copies itself and that can "infect" other programs by modifying them or their environment such that a call to an infected program implies a call to a possibly evolved copy of the virus.

These software "pranks" are very serious; they are spreading faster than they are being stopped, and even the least harmful of viruses could be life-threatening. For example, in the context of a hospital life-support system, a virus that "simply" stops a computer and displays a message until a key is pressed, could be fatal. Further, those who create viruses can not halt their spread, even if they wanted to. It requires a concerted effort from computer users to be "virus-aware", rather than continuing the ambivalence that has allowed computer viruses to become such a problem.

Computer viruses are actually a special case of something known as "malicious logic" or "malware".
Consider the set of programs which produce one or more programs as output. For any pair of programs p and q, p eventually produces q if and only if p produces q either directly or through a series of steps (the "eventually produces" relation is the transitive closure of the "produces" relation.) A viral set is a maximal set of programs V such that for every pair of programs p and q in V, p eventually produces q, and q eventually produces p. ("Maximal" here means that there is no program r not in the set that could be added to the set and have the set still satisfy the conditions.) For the purposes of this paper, a computer virus is a viral set; a program p is said to be an instance of, or to be infected with, a virus V precisely when p is a member of the viral set V. A program is said to be infected simpliciter when there is some viral set V of which it is a member. A program which is an instance of some virus is said to spread whenever it produces another instance of that virus. The simplest virus is a viral set that contains exactly one program, where that program simply produces itself. Larger sets represent polymorphic viruses, which have a number of different possible forms, all of which eventually produce all the others.
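To make the set-theoretic definitions above concrete, the following toy model (my own illustration, with a made-up "produces" relation) treats programs as labels, computes "eventually produces" as a transitive closure, and recovers the viral sets.

```python
# Toy model of the definitions above: "programs" are just labels and "produces"
# is an explicit relation. "Eventually produces" is its transitive closure, and
# a viral set is a maximal group in which every member eventually produces
# every other member (and itself). The example relation below is made up.
produces = {
    "v1": {"v2"},          # a polymorphic virus with two forms: v1 <-> v2
    "v2": {"v1"},
    "simple": {"simple"},  # the simplest virus: produces exactly itself
    "tool": {"report"},    # ordinary program: produces output, not itself
    "report": set(),
}

def eventually_produces(start):
    """All programs reachable from `start` through one or more 'produces' steps."""
    seen, frontier = set(), set(produces.get(start, ()))
    while frontier:
        p = frontier.pop()
        if p not in seen:
            seen.add(p)
            frontier |= set(produces.get(p, ()))
    return seen

def viral_sets():
    """Maximal sets in which every pair of members (and each member with itself)
    satisfies the mutual 'eventually produces' condition."""
    reach = {p: eventually_produces(p) for p in produces}
    infected = [p for p in produces if p in reach[p]]      # p eventually produces p
    classes = {frozenset(q for q in infected
                         if q in reach[p] and p in reach[q])
               for p in infected}
    return [set(c) for c in classes]

print(viral_sets())   # e.g. [{'v1', 'v2'}, {'simple'}]
```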

An optical mouse is an advanced computer pointing device that uses a light-emitting diode (LED), an optical sensor, and digital signal processing (DSP) in place of the traditional mouse ball and electromechanical transducer. Movement is detected by sensing changes in reflected light, rather than by interpreting the motion of a rolling sphere.

The optical mouse takes microscopic snapshots of the working surface at a rate of more than 1,000 images per second. If the mouse is moved, the image changes. The tiniest irregularities in the surface can produce images good enough for the sensor and DSP to generate usable movement data. The best surfaces reflect but scatter light; an example is a blank sheet of white drawing paper. Some surfaces do not allow the sensor and DSP to function properly because the irregularities are too small to be detected. An example of a poor optical-mousing surface is unfrosted glass.
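To make this concrete, here is a rough sketch (my own illustration, not the actual sensor firmware) of the displacement estimation step: compare two consecutive snapshots of the surface and find the shift at which they line up best.

```python
# Rough sketch (illustrative only) of the displacement estimation an optical
# mouse's DSP performs: compare two consecutive surface snapshots and find the
# (dx, dy) shift at which they line up best. Real sensors use dedicated
# correlation hardware; the "surface" here is just a random texture.
import numpy as np

def estimate_motion(prev, curr, max_shift=3):
    """Return (dx, dy): how far the sensor moved across the surface, in pixels,
    found by testing which shift of the previous frame best matches the current one."""
    best, best_err = (0, 0), float("inf")
    core = (slice(max_shift, -max_shift), slice(max_shift, -max_shift))
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, -dy, axis=0), -dx, axis=1)
            err = np.sum((shifted[core] - curr[core]) ** 2)  # ignore wrapped borders
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

rng = np.random.default_rng(0)
surface = rng.random((40, 40))            # microscopic texture of the desk
frame1 = surface[10:26, 10:26]            # 16x16 snapshot
frame2 = surface[12:28, 11:27]            # same spot after moving 1 px in x, 2 px in y
print(estimate_motion(frame1, frame2))    # -> (1, 2)
```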

In practice, an optical mouse does not need cleaning, because it has no moving parts. This all-electronic feature also eliminates mechanical fatigue and failure. If the device is used with the proper surface, sensing is more precise than is possible with any pointing device using the old electromechanical design. This is an asset in graphics applications, and it makes computer operation easier in general.





The Firewalls and Internet Security seminar defines three basic types of firewalls: packet filters, circuit level gateways, and application gateways. Of course there are also hybrid firewalls, which can be combinations of all three.
Packet filter gateways are usually comprised of a series of simple checks based on the source and destination IP address and ports. They are very simple from the user's point of view, since the user will probably not even realize that the checks are taking place (unless, of course, a connection was denied). However, that simplicity is also their biggest problem: there is no way for the filter to securely distinguish one user from another. Packet filters are frequently located on routers, and most major router vendors supply packet filters as part of the default distribution. You may have heard of smart packet filters.

Smart packet filters are really not very different from simple packet filters except they have the ability to interpret the data stream and understand that other connections, which would normally be denied, should be allowed (e.g. ftp's PORT command would be understood and the reverse connection allowed). Smart packet filters, however, still cannot securely distinguish one user on a machine from another. Brimstone incorporates a very smart and configurable application layer filter.
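To make the packet-filter idea above concrete, here is a minimal sketch (my own, with made-up addresses, ports and rules, not Brimstone's or any vendor's syntax) of the kind of check a simple, non-smart packet filter performs on each packet.

```python
# Minimal sketch of a (non-"smart") packet filter: every packet is checked
# against an ordered rule list using only its addresses and ports. The
# addresses, ports and rules below are made up for illustration.
from ipaddress import ip_address, ip_network

RULES = [
    # (action, source network, destination network, destination port or None = any)
    ("allow", "0.0.0.0/0",       "192.0.2.0/24", 25),    # inbound SMTP to mail hosts
    ("allow", "198.51.100.0/24", "0.0.0.0/0",    None),  # internal hosts go anywhere
    ("deny",  "0.0.0.0/0",       "0.0.0.0/0",    None),  # default: drop everything else
]

def filter_packet(src, dst, dst_port):
    """Return the action of the first rule matching this packet."""
    for action, src_net, dst_net, port in RULES:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and (port is None or port == dst_port)):
            return action
    return "deny"

# Note that nothing here identifies the human user behind the packet, which is
# exactly the weakness the text points out.
print(filter_packet("203.0.113.9", "192.0.2.10", 25))    # allow (SMTP in)
print(filter_packet("198.51.100.7", "203.0.113.9", 80))  # allow (outbound web)
print(filter_packet("203.0.113.9", "192.0.2.10", 23))    # deny  (telnet in)
```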

Circuit-level gateways are much like packet filters except that they operate at a different level of the OSI protocol stack. Unlike most packet filters, connections passing through a circuit-level gateway appear to the remote machine as if they originated from the firewall. This is very useful to hide information about protected networks. Socks is a popular de-facto standard for automatic circuit-level gateways. Brimstone supports both Socks and a manual circuit-level gateway.

Application gateways represent a totally different concept for firewalls. Instead of a list of simple rules which control which packets or sessions should be allowed through, a program accepts the connection, typically performs strong authentication on the user (which often requires one-time passwords), and then often prompts the user for information on what host to connect to. This is, in some senses, more limited than packet filters and circuit-level gateways, since you must have a gateway program for each application (e.g. telnet, ftp, X11, etc.). However, for most environments it provides much higher security, because unlike the other types of gateways it can perform strong user authentication to ensure that the person on the other end of the IP connection is really who they say they are. Additionally, once you know who you are talking to, you can perform other types of access checks on a per-user basis, such as what times they can connect, what hosts they can connect to, what services they can use, etc. Many people only consider application gateways to be true firewalls, because of the lack of user authentication in the other two types. The core Brimstone ACL provides application gateway functionality.

Hybrid gateways are ones where the above types are combined. Quite frequently one finds an application gateway combined with a circuit-level gateway or packet filter, since this can allow internal hosts unencumbered access to unsecured networks while forcing strong security on connections from unsecured networks into the secured internal networks. The recommended Brimstone configurations are hybrid firewalls.



Definition
The millions of businesses, the billions of humans that compose them, and the trillions of devices that they will depend upon all require the services of the IT industry to keep them running. And it's not just a matter of numbers. It's the complexity of these systems and the way they work together that is creating a shortage of skilled IT workers to manage all of the systems. It's a problem that is not going away, but will grow exponentially, just as our dependence on technology has.
The solution is to build computer systems that regulate themselves much in the same way our autonomic nervous system regulates and protects our bodies. This new model of computing is called autonomic computing. The good news is that some components of this technology are already up and running. However, complete autonomic systems do not yet exist. Autonomic computing calls for a whole new area of study and a whole new way of conducting business.



The Benefits
Autonomic computing was conceived to lessen the spiraling demands for skilled IT resources, reduce complexity and to drive computing into a new era that may better exploit its potential to support higher order thinking and decision making. Immediate benefits will include reduced dependence on human intervention to maintain complex systems accompanied by a substantial decrease in costs. Long-term benefits will allow individuals, organizations and businesses to collaborate on complex problem solving.



The Problem
Within the past two decades the development of raw computing power coupled with the proliferation of computer devices has grown at exponential rates. This phenomenal growth along with the advent of the Internet have led to a new age of accessibility - to other people, other systems, and most importantly, to information. This boom has also led to unprecedented levels of complexity.

The simultaneous explosion of information and integration of technology into everyday life has brought on new demands for how people manage and maintain computer systems. Demand is already outpacing supply when it comes to managing complex, and even simple computer systems. Even in uncertain economic times, demand for skilled IT workers is expected to increase by over 100 percent in the next six years.

As access to information becomes omnipresent through PCs, hand-held and wireless devices, the stability of current infrastructure, systems, and data is at an increasingly greater risk of suffering outages and general disrepair. IBM believes that we are quickly reaching a threshold moment in the evolution of the industry's views toward computing in general and the associated infrastructure, middleware, and services that maintain them. The increasing system complexity is reaching a level beyond human ability to manage and secure.

This increasing complexity with a shortage of skilled IT professionals points towards an inevitable need to automate many of the functions associated with computing today.



The Solution
IBM's proposed solution looks at the problem from the most important perspective: the end user's. How do IT customers want computing systems to function? They want to interact with them intuitively, and they want to have to be far less involved in running them. Ideally, they would like computing systems to pretty much take care of the mundane elements of management by themselves.
The most direct inspiration for this functionality that exists today is the autonomic function of the human central nervous system. Autonomic controls use motor neurons to send indirect messages to organs at a sub-conscious level. These messages regulate temperature, breathing, and heart rate without conscious thought. The implications for computing are immediately evident; a network of organized, "smart" computing components that give us what we need, when we need it, without a conscious mental or even physical effort.
IBM has named its vision for the future of computing "autonomic computing." This new paradigm shifts the fundamental definition of the technology age from one of computing, to one defined by data. Access to data from multiple, distributed sources, in addition to traditional centralized storage devices will allow users to transparently access information when and where they need it. At the same time, this new view of computing will necessitate changing the industry's focus on processing speed and storage to one of developing distributed networks that are largely self-managing, self-diagnostic, and transparent to the user. 


Definition:


Using an ordinary phone is, for most people, a common daily occurrence, as is listening to your favorite CD containing digitally recorded music. It is only a small extension of these technologies to have your voice transmitted in data packets. The transmission of voice in the phone network was done originally using an analog signal, but this has been replaced in much of the world by digital networks. Although many of our phones are still analog, the network that carries that voice has become digital.

In today's phone networks, the analog voice going into our analog phones is digitized as it enters the phone network. This digitization process, shown in Figure 1 below, records a sample of the loudness (voltage) of the signal at fixed intervals of time. These digital voice samples travel through the network one byte at a time.
At the destination phone line, the byte is put into a device that takes the voltage number and produces that voltage for the destination phone. Since the output signal is the same as the input signal, we can understand what was originally spoken.
The evolution of that technology is to take the numbers that represent the voltage and group them together in a data packet, similar to the way computers send and receive information over the Internet. Voice over IP is the technology of taking units of sampled speech data and carrying them across the network in IP packets.
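As a toy illustration of this sampling-and-packetizing chain (my own sketch, not a real codec or RTP implementation), the following code samples a stand-in "voltage" signal at telephone rate, quantizes each sample to a byte, and groups the bytes into packet-sized payloads.

```python
# Toy illustration (not a real codec) of the chain described above: sample an
# analog voice signal at fixed intervals, quantize each sample to a byte, and
# group the bytes into packet-sized payloads for the IP network.
import math

SAMPLE_RATE = 8000          # samples per second, as in ordinary telephony
PACKET_MS = 20              # an assumed payload duration per VoIP packet
SAMPLES_PER_PACKET = SAMPLE_RATE * PACKET_MS // 1000   # 160 samples

def analog_voice(t):
    """Stand-in for the microphone voltage: a simple tone."""
    return math.sin(2 * math.pi * 440 * t)

def digitize(duration_s):
    """Sample the signal and quantize each sample to an 8-bit value."""
    samples = []
    for n in range(int(duration_s * SAMPLE_RATE)):
        voltage = analog_voice(n / SAMPLE_RATE)
        samples.append(int((voltage + 1) / 2 * 255))   # map [-1, 1] -> 0..255
    return bytes(samples)

def packetize(stream: bytes):
    """Group the digitized samples into fixed-size packet payloads."""
    return [stream[i:i + SAMPLES_PER_PACKET]
            for i in range(0, len(stream), SAMPLES_PER_PACKET)]

payloads = packetize(digitize(duration_s=0.1))
print(len(payloads), "packets of", len(payloads[0]), "voice samples each")  # 5 of 160
```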
So at its most basic level, the concept of VoIP is straightforward. The complexity of VoIP comes in the many ways to represent the data, setting up the connection between the initiator of the call and the receiver of the call, and the types of networks that carry the call.

Using data packets to carry voice is not done only with IP packets. Although they won't be discussed here, there are also Voice over Frame Relay (VoFR) and Voice over ATM (VoATM) technologies. Many of the issues discussed for VoIP also apply to the other packetized voice technologies.
The increasing multimedia content on the Internet has drastically reduced the objections to putting voice on data networks. Basically, Internet telephony is the transmission of multimedia information, such as voice or video, in discrete packets over the Internet or any other IP-based Local Area Network (LAN) or Wide Area Network (WAN). Commercial Voice over IP (Internet Protocol) was introduced in early 1995, when VocalTec introduced its Internet telephone software. Because the technologies and the market have gradually reached maturity, many industry-leading companies have developed products for Voice over IP applications since 1995.
VoIP, or "Voice over Internet Protocol" refers to sending voice and fax phone calls over data networks, particularly the Internet. This technology offers cost savings by making more efficient use of the existing network.

Traditionally, voice and data were carried over separate networks optimized to suit the differing characteristics of voice and data traffic. With advances in technology, it is now possible to carry voice and data over the same networks whilst still catering for the different characteristics required by voice and data.
Voice-over-Internet-Protocol (VOIP) is an emerging technology that allows telephone calls or faxes to be transported over an IP data network.


Introduction:
The number of subscribers with wireless access to the Internet through laptops, personal digital assistants (PDAs), cellular phones, pagers, and other wireless devices is rapidly increasing. In 1998, 1.2 million people had wireless web access. IDC predicts that in 2003 the number of wireless Internet subscribers will be 40.4 million. Because this market is growing at such a fast rate, content providers see an opportunity to enter the market by forming partnerships with wireless carriers to deliver data applications to wireless devices. In fact, companies solely dedicated to this type of service are starting to appear.

Analysts predict that e-commerce will be a key application for wireless Internet access. Buying books, trading stocks, reserving hotel rooms and renting cars from anywhere will be easy and consumers will demand these types of services. IDC states that the wireless Internet transaction value in 1998 was $4.3 billion. This number is expected to increase to $38 billion in 2003. IDC predicts that carriers will eventually charge a flat monthly fee for wireless access. Fees for wireless access will drop to be equal to or less than voice services in the next few years, allowing most people to afford wireless access to the Internet.

Wireless Content: Internet Portals
The Strategis Group defines a wireless portal as "a customized point of entry through which a wireless subscriber can access a limited number of Internet sites and services." Many wireless carriers offer internet content to their subscribers through partnerships with some of the large internet content portal companies. For example, AT&T offers its wireless Internet Digital PocketNet subscribers content from ABCNews.com, Bloomberg.com, AOL, and ESPN.com. Sprint PCS partners with AOL, CNN.com, Amazon.com and The Weather Channel. Other wireless carriers, such as USWest and AirTouch, have similar deals. Wireless networks that transmit data at speeds equivalent to or under 56 Kbps, or narrowband networks, are currently more readily available today than wireless broadband networks. Data delivery to wireless devices will be restricted by narrowband networks. Access to graphics and content best accessed through high-speed connections will be limited. Instead, time-sensitive and personalized data delivery, as well as e-commerce activities, will fuel the initial drive for the wireless content market.

Wireless portals will be targeted toward broad consumer markets and toward vertical business markets. It is expected that portals will serve as personalized information aggregators for end-users. Corporate wireless portal solutions may offer secure end-to-end wireless connectivity for business end-users, similar to a wireline intranet. Analysts expect two types of consumer portals to appear. "Push" portals will enable the end-user to set up custom information that they would like delivered to them periodically. "Push and pull" portals will both deliver personalized content to the end-user and allow the end-user to search the portal for information. Corporate wireless portal solutions will provide customized services like time sheet and expense report monitoring and integration, billing capabilities, sales force automation, and access to inventory databases.

Definition
Bluetooth wireless technology is a cable replacement technology that provides wireless communication between portable devices, desktop devices and peripherals. It is used to swap data and synchronize files between devices without having to connect them to each other with a cable. The wireless link has a range of 10 m, which offers the user mobility. This technology can be used to make a wireless data connection to conventional local area networks (LAN) through an access point. There is no need for the user to open an application or press a button to initiate a process; Bluetooth wireless technology is always on and runs in the background. Bluetooth devices scan for other Bluetooth devices, and when these devices are in range they start to exchange messages so they can become aware of each other's capabilities. These devices do not require a line of sight to transmit data to each other.

Within a few years about 80 percent of mobile phones are expected to carry the Bluetooth chip. The Bluetooth transceiver operates in the globally available unlicensed ISM radio band of 2.4 GHz. The ISM bands include the frequency ranges 902 MHz to 928 MHz and 2.4 GHz to 2.484 GHz, which do not require an operator license from a regulatory agency. This means that Bluetooth technology can be used virtually anywhere in the world. Another type of wireless technology in use today is infrared. The choice between the two wireless technologies will depend on the application for which they are being used. Bluetooth is an economical wireless solution that is convenient, reliable, easy to use and operates over a longer distance than infrared. The initial development was started in 1994 by Ericsson. Bluetooth now has a special interest group (SIG) with 1800 member companies worldwide. Bluetooth technology enables voice and data transmission over a short-range radio link.

There is a wide range of devices which can be connected easily and quickly without the need for cables. Soon people the world over will enjoy the convenience, speed and security of instant wireless connections. Bluetooth is expected to be embedded in hundreds of millions of mobile phones, PCs, laptops and a whole range of other electronic devices in the next few years. This is mainly because of the elimination of cables, which makes the work environment look and feel comfortable and inviting.



Origin Of Bluetooth
In 1994, Ericsson Mobile Communications initiated a study to investigate the feasibility of a low-power, low-cost radio interface between mobile phones and their accessories. The aim of the study was to find a way to eliminate cables between mobile phones and PC cards, headsets, desktops and other devices. The study was part of a larger project investigating how different communication devices could be connected to the cellular network via a mobile phone. Ericsson's work in this area caught the attention of IBM, Intel, Nokia, and Toshiba. The companies formed the special interest group (SIG) in May 1998, which grew to over 1500 member companies by April 2000. The companies jointly developed the Bluetooth 1.0 specification, which was released in July 1999.

The engineers at Ericsson code-named the new wireless technology Bluetooth to honor a 10th century Viking king of Denmark. Harald Bluetooth reigned from 940 to 985 and is credited not only with uniting that country, but with establishing Christianity there as well. Harald's name was actually Blåtand, which roughly translates into English as 'Bluetooth'. This has nothing to do with the color of his teeth - some claim he neither brushed nor flossed. Blåtand actually referred to Harald's very dark hair, which was unusual for a Viking. Other Viking states included Norway and Sweden, which is the connection with Ericsson (literally, Eric's son) and its selection of Bluetooth as the code-name for this wireless technology.

Definition

The European Space Agency (ESA) has programmes underway to place satellites carrying optical terminals in GEO orbit within the next decade. The first is the ARTEMIS technology demonstration satellite, which carries both a microwave and a SILEX (Semiconductor-laser Inter-satellite Link Experiment) optical inter-orbit communications terminal. SILEX employs direct detection and GaAlAs diode laser technology; the optical antenna is a 25 cm diameter reflecting telescope.

The SILEX GEO terminal is capable of receiving data modulated on to an incoming laser beam at a bit rate of 50 Mbps and is equipped with a high power beacon for initial link acquisition together with a low divergence (and unmodulated) beam which is tracked by the communicating partner. ARTEMIS will be followed by the operational European data relay system (EDRS) which is planned to have data relay Satellites (DRS). These will also carry SILEX optical data relay terminals.

Once these elements of Europe's space infrastructure are in place, there will be a need for optical communications terminals on LEO satellites which are capable of transmitting data to the GEO terminals. A wide range of LEO spacecraft is expected to fly within the next decade, including earth observation and science, manned, and military reconnaissance systems.

The LEO terminal is referred to as a user terminal since it enables real-time transfer of LEO instrument data back to the ground to a user with access to the DRS. LEO instruments generate data over a range of bit rates, extending to many Mbps depending upon the function of the instrument. A significant proportion have data rates falling in the region around and below 2 Mbps, and this data would normally be transmitted via an S-band microwave IOL.

ESA initiated a development programme in 1992 for a LEO optical IOL terminal targeted at this segment of the user community. This is known as the Small Optical User Terminal (SOUT), with the features of low mass, small size and compatibility with SILEX. The programme is in two phases. Phase 1 was to produce a terminal flight configuration and perform detailed subsystem design and modelling. Phase 2, which started in September 1993, is to build an elegant breadboard of the complete terminal.


Definition

Although road safety performance is good, the number of people killed and injured on our roads remains unacceptably high. The road safety strategy was therefore introduced to support the new casualty reduction targets. The road safety strategy includes all forms of intervention based on engineering, education and enforcement, and recognizes that there are many different factors that lead to traffic collisions and casualties. The main factor is vehicle speed. We use traffic lights and other traffic management measures to reduce speed; one of them is the speed camera.

Speed cameras are placed at the side of urban and rural roads, usually to catch transgressors of the stipulated speed limit for that road. Their sole purpose is to identify and prosecute those drivers who pass them while exceeding the stipulated speed limit.

At first glance this seems reasonable: ensuring that road users do not exceed the speed limit must be a good thing, because it increases road safety, reduces accidents and protects other road users and pedestrians.
So speed limits are a good idea. To enforce these speed limits, laws are passed making speeding an offence and signs are erected to indicate the maximum permissible speeds. The police cannot be everywhere to enforce the speed limit, and so enforcement cameras are directed to do this work; no one with an ounce of common sense deliberately drives through a speed camera in order to be fined and penalized.
So nearly everyone slows down for the speed camera. We finally have a solution to the speeding problem. Now, if we assume that speed cameras are the only way to make drivers slow down, and that they work efficiently, then we would expect there to be a great number of them everywhere.

Definition

RF light sources follow the same principles of converting electrical power into visible radiation as conventional gas discharge lamps. The fundamental difference between RF lamps and conventional lamps is that RF lamps operate without electrodes. The presence of electrodes in conventional fluorescent and High Intensity Discharge lamps has put many restrictions on lamp design and performance and is a major factor limiting lamp life.

Recent progress in semiconductor power switching electronics, which is revolutionizing many sectors of the electrical industry, and a better understanding of RF plasma characteristics are making it possible to drive lamps at high frequencies. The very first proposal for RF lighting, as well as the first patent on RF lamps, appeared about 100 years ago, half a century before the basic principles of lighting technology based on gas discharge had been developed.

Discharge tubes
A discharge tube is a device in which a gas conducting an electric current emits visible light. It is usually a glass tube from which virtually all the air has been removed (producing a near vacuum), with electrodes at each end. When a high-voltage current is passed between the electrodes, the few remaining gas atoms (or some deliberately introduced ones) ionize and emit coloured light as they conduct the current along the tube.

The light originates as electrons change energy levels in the ionized atoms. By coating the inside of the tube with a phosphor, invisible emitted radiation (such as ultraviolet light) can produce visible light; this is the principle of the fluorescent lamp. We will consider different kinds of RF discharges and their advantages and restrictions for lighting applications.

Definition

The Wireless Application Protocol Forum is an industry group dedicated to the goal of enabling sophisticated telephony and information services on handheld wireless devices. These devices include mobile telephones, pagers, personal digital assistants (PDAs) and other wireless terminals. Recognizing the value and utility of the World Wide Web architecture, the WAP forum has chosen to align its technology closely with the Internet and the Web. The WAP specification extends and leverages existing technologies, such as IP, HTTP, XML, SSL, URLs, scripting and other content formats. Ericsson, Motorola, Nokia and Unwired Planet founded the WAP Forum in June, 1997.
Since then, it has experienced impressive membership growth, with members joining from the ranks of the world's premier wireless service providers, handset manufacturers, infrastructure providers, and software developers. WAP Forum membership is open to all industry participants.



Goals of WAP Forum
The WAP Forum has the following goals:

To bring Internet content and advanced data services to Wireless phones and other wireless terminals.

To create a global wireless protocol specification that works across all wireless network technologies.

To enable the creation of content and applications that scale across a wide range of wireless bearer networks and device types.

To embrace and extend existing standards and technology wherever possible and appropriate.

It is also very important that the WAP Forum's specifications are designed in such a way that they complement existing standards. For example, the WAP V1.0 specification is designed to sit on top of existing bearer channel standards, so that any bearer standard can be used with the WAP protocols to implement complete product solutions. When the WAP Forum identifies a new area of technology where a standard does not exist, or exists but needs modification for wireless, it works to submit its specifications to other industry standards groups.



WAP Protocol Stack
Any network is organized as a series of layers or levels, where each level performs a specific function. The set of rules that governs the communication between the peer entities within a layer is called a protocol. The layers and protocols together form the protocol stack. The request from the mobile device is sent as a URL through the operator's network to the WAP gateway, which is the interface between the wireless network and the Internet.

Definition

In a constantly changing industry, HDMI is the next major attempt at an all-in-one, standardized, universal connector for audio/video applications. Featuring a modern design and backed by the biggest names in the electronics industry, HDMI is set to finally unify all digital media components with a single cable, remote, and interface.
HDMI is built with a 5 Gbps bandwidth limit, over twice that of HDTV (which runs at 2.2 Gbps), and is built forwards-compatible by offering unallocated pipeline for future technologies. The connectors are sliding contact (like FireWire and USB) instead of screw-on (like DVI), and are not nearly as bulky as most current video interfaces.

The screaming bandwidth of HDMI is structured around delivering the highest-quality digital video and audio throughout your entertainment center. Capable of all international frequencies and resolutions, the HDMI cable will replace all analog signals (i.e. S-Video, Component, Composite, and Coaxial), as well as HDTV digital signals (i.e. DVI, P&D, DFP), with absolutely no compromise in quality.
Additionally, HDMI is capable of carrying up to 8 channels of digital audio, replacing the old analog connections (RCA, 3.5mm) as well as optical formats (S/PDIF, Toslink).

VIDEO INTERFACES

Video Graphics Array (VGA) is an analog computer display standard first marketed in 1987 by IBM. While it has been obsolete for some time, it was the last graphical standard that the majority of manufacturers decided to follow, making it the lowest common denominator that all PC graphics hardware supports prior to a device-specific driver being loaded. For example, the Microsoft Windows splash screen appears while the machine is still operating in VGA mode, which is the reason that this screen always appears in reduced resolution and color depth.

The term VGA is often used to refer to a resolution of 640×480, regardless of the hardware that produces the picture. It may also refer to the 15-pin D-subminiature VGA connector which is still widely used to carry analog video signals of all resolutions.

VGA was officially superseded by IBM's XGA standard, but in reality it was superseded by numerous extensions to VGA made by clone manufacturers that came to be known as "Super VGA".

[Figure: a male DVI-I plug]
The DVI interface uses a digital protocol in which the desired brightness of pixels is transmitted as binary data. When the display is driven at its native resolution, all it has to do is read each number and apply that brightness to the appropriate pixel. In this way, each pixel in the output buffer of the source device corresponds directly to one pixel in the display device, whereas with an analog signal the appearance of each pixel may be affected by its adjacent pixels as well as by electrical noise and other forms of analog distortion.
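Some back-of-the-envelope arithmetic (my own, ignoring blanking intervals and link-layer encoding overhead such as TMDS) shows roughly how much raw pixel data such a digital link must carry.

```python
# Back-of-the-envelope arithmetic (illustrative only; blanking intervals and
# link encoding overhead such as TMDS are ignored) for the raw pixel data a
# digital video link like DVI/HDMI must carry.
def raw_video_gbps(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

for name, (w, h, hz) in {
    "VGA 640x480 @ 60 Hz": (640, 480, 60),
    "720p @ 60 Hz":        (1280, 720, 60),
    "1080p @ 60 Hz":       (1920, 1080, 60),
}.items():
    print(f"{name:20s} ~{raw_video_gbps(w, h, hz):.2f} Gbps of raw pixel data")
```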
Previous standards such as the analog VGA were designed for CRT-based devices and thus did not use discrete time. As the analog source transmits each horizontal line of the image, it varies its output voltage to represent the desired brightness. In a CRT device, this is used to vary the intensity of the scanning beam as it moves across the screen.
