Technology

Present software technologies

Hologram Basics:

There are a few basic things to learn about holograms.
First, I will briefly explain the two principles behind the hologram:
1) the wave interference pattern, and 2) the coherency of light.

Then, I will describe how holograms store and project information.
1. Interference Pattern:

    The hologram is based upon Nobel Prize winner Dennis Gabor's theory concerning interference patterns. Gabor theorized in 1947 that each crest of the wave pattern contains the whole information of its original source, and that this information could be stored on film and reproduced. This is why it is called a hologram (from the Greek holos, "whole").

    A pebble dropped in a still pond is the most basic example used to describe the wave interference process. If you drop a pebble into a pond, it creates an infinitely expanding circular wave pattern. If you drop two pebbles into a pond, the waves' crests will eventually meet. The intersecting points of the waves' crests are called the points of interference. The interference of two or more waves carries the whole information about all the waves.
2. Coherent Light:

    Gabor recorded several images holographically, but he was not successful at producing a clear image because he could only use forms of incoherent, white light. An example of incoherent light would be watching cars coming out of a tunnel: you would likely see many different models and types of cars, traveling at different speeds and at different distances apart. Now suppose you started seeing the same model and type of car, all heading down the highway at the same speed and the same distance apart. That would be an example of coherent light. Holograms need coherent light to record or play back the image clearly.
    The LASER (Light Amplification by Stimulated Emission of Radiation) was invented to produce coherent light. Incoherent light travels at different frequencies and in different phases; coherent light travels at the same frequency and in the same phase (100% coherent light is rare). It is important to use coherent light because the information is carried on the crest of each wave: the more points of intersection, the more information.
3. Storing Information:

    Unlike a camera, which has only one point of light reference, a hologram has two or more points of light reference. The intersection points of the two light waves contain the whole information of both reference points. A LASER is used as the light source so the waves are coherent.

    A LASER is projected onto a partially silvered mirror called a beam splitter. This mirror splits the original beam into two beams. One beam travels through a lens that diffuses the light onto the object being recorded. This light, called the object beam, is reflected off the object onto the film plate. The second beam is bounced off a mirror and then through a lens that diffuses the light directly onto the film. This beam is called the reference beam. The same light source needs to be used for both beams so the waves will have perfect intersection points.
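
To make the recording step concrete, here is a small numerical sketch (my own illustration, not part of the original text) of the fringe pattern that a tilted plane reference beam and a point-source object beam would lay down on the film plate. The wavelength, tilt angle, and distances are assumed values.

    import numpy as np

    wavelength = 633e-9                  # HeNe laser wavelength in metres (assumed)
    k = 2 * np.pi / wavelength           # wavenumber
    x = np.linspace(-1e-3, 1e-3, 2000)   # positions across the film plate (m)

    # Reference beam: a plane wave hitting the film at a small angle.
    theta = np.deg2rad(2.0)
    reference = np.exp(1j * k * np.sin(theta) * x)

    # Object beam: the phase of a wave from a point 0.1 m behind the film
    # (amplitude variation omitted to keep the sketch short).
    z = 0.1
    r = np.sqrt(x**2 + z**2)
    object_beam = np.exp(1j * k * r)

    # The film records intensity, |E_ref + E_obj|^2: the interference fringes
    # encode both the amplitude and the phase of the object beam.
    intensity = np.abs(reference + object_beam) ** 2
    print(intensity.min(), intensity.max())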

    To add motion (time) to your hologram, you would turn the object, or move the mirrors and lenses, and shoot again onto the same film. The original waves recorded on the film will intersect with the waves from the new perspective.

TCP/IP is a set of protocols developed to allow cooperating computers to share resources across a network. It was developed by a community of researchers centered around the ARPAnet. Certainly the ARPAnet is the best-known TCP/IP network. However, as of June 1987, at least 130 different vendors had products that support TCP/IP, and thousands of networks of all kinds use it.

The most accurate name for the set of protocols we are describing is the "Internet protocol suite". TCP and IP are two of the protocols in this suite. (They will be described below.) Because TCP and IP are the best known of the protocols, it has become common to use the term TCP/IP or IP/TCP to refer to the whole family. It is probably not worth fighting this habit. However this can lead to some oddities. For example, I find myself talking about NFS as being based on TCP/IP, even though it doesn't use TCP at all. (It does use IP. But it uses an alternative protocol, UDP, instead of TCP. All of this alphabet soup will be unscrambled in the following pages.)
The Internet is a collection of networks, including the ARPAnet, NSFnet, regional networks such as NYsernet, local networks at a number of university and research institutions, and a number of military networks. The term "Internet" applies to this entire set of networks.

TCP/IP is a layered set of protocols. In order to understand what this means, it is useful to look at an example. A typical situation is sending mail. First, there is a protocol for mail. This defines a set of commands which one machine sends to another, e.g. commands to specify who the sender of the message is, who it is being sent to, and then the text of the message. However this protocol assumes that there is a way to communicate reliably between the two computers.

Mail, like other application protocols, simply defines a set of commands and messages to be sent. It is designed to be used together with TCP and IP. TCP is responsible for making sure that the commands get through to the other end. It keeps track of what is sent, and retransmits anything that did not get through. If any message is too large for one datagram, e.g. the text of the mail, TCP will split it up into several datagrams, and make sure that they all arrive correctly. Since these functions are needed for many applications, they are put together into a separate protocol, rather than being part of the specifications for sending mail.
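
A rough sketch of this layering in Python is shown below: the mail protocol is just a sequence of text commands, while TCP (reached through the socket API) worries about reliable delivery. The host names and addresses are placeholders, and a real exchange would check each reply code.

    import socket

    with socket.create_connection(("mail.example.com", 25), timeout=10) as s:

        def send(line):
            s.sendall((line + "\r\n").encode("ascii"))        # TCP carries the bytes
            print(s.recv(1024).decode("ascii", "replace"))    # server's reply

        print(s.recv(1024).decode("ascii", "replace"))        # greeting banner
        send("HELO client.example.com")                       # identify ourselves
        send("MAIL FROM:<alice@example.com>")                 # who the sender is
        send("RCPT TO:<bob@example.com>")                     # who it is being sent to
        send("DATA")                                          # then the text of the message
        send("Subject: hello\r\n\r\nHi Bob!\r\n.")            # message ends with a lone dot
        send("QUIT")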



Definition:
A fourth generation (4G) wireless system is a packet-switched wireless system with wide-area coverage and high throughput. It is designed to be cost effective and to provide high spectral efficiency. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB) radio, and millimeter-wave wireless. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The high performance is achieved by the use of long-term channel prediction, in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. The frequency band is 2-8 GHz, and the system gives the ability for worldwide roaming to access a cell anywhere.
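
As an illustration of the OFDM idea mentioned above (my own sketch with arbitrary parameters, not a 4G implementation), the fragment below modulates a block of subcarriers with an inverse FFT, prepends a cyclic prefix, and shows that an FFT at the receiver recovers the original symbols.

    import numpy as np

    num_subcarriers = 64
    cyclic_prefix = 16

    # Random QPSK symbols, one per subcarrier (placeholder data).
    bits = np.random.randint(0, 2, (num_subcarriers, 2))
    symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

    # One OFDM symbol: the IFFT across the subcarriers keeps them orthogonal;
    # the cyclic prefix absorbs multipath delay spread.
    time_domain = np.fft.ifft(symbols)
    ofdm_symbol = np.concatenate([time_domain[-cyclic_prefix:], time_domain])

    # The receiver strips the prefix and applies an FFT to get the symbols back.
    recovered = np.fft.fft(ofdm_symbol[cyclic_prefix:])
    print(np.allclose(recovered, symbols))   # True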

Wireless mobile communications systems are uniquely identified by "generation" designations. Introduced in the early 1980s, first generation (1G) systems were marked by analog frequency modulation and were used primarily for voice communications. Second generation (2G) wireless communications systems, which made their appearance in the late 1980s, were also used mainly for voice transmission and reception. The wireless system in widespread use today goes by the name of 2.5G, an "in between" service that serves as a stepping stone to 3G. Whereas 2G communications is generally associated with Global System for Mobile (GSM) service, 2.5G is usually identified as being "fueled" by General Packet Radio Services (GPRS) along with GSM. 3G systems, making their appearance in late 2002 and in 2003, are designed for voice and paging services as well as interactive media use such as teleconferencing, Internet access, and other services. The problem with 3G wireless systems is bandwidth: these systems provide only WAN coverage ranging from 144 kbps (for vehicle mobility applications) to 2 Mbps (for indoor static applications). Segue to 4G, the "next dimension" of wireless communication. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB) radio, millimeter-wave wireless, and smart antennas. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The frequency band is 2-8 GHz, and the system gives the ability for worldwide roaming to access a cell anywhere.

Features:
o Support for interactive multimedia, voice, streaming video, Internet, and other broadband services
o IP-based mobile system
o High speed, high capacity, and low cost per bit
o Global access, service portability, and scalable mobile services
o Seamless switching, and a variety of Quality of Service driven services
o Better scheduling and call admission control techniques
o Ad hoc and multi-hop networks (the strict delay requirements of voice make multi-hop network service a difficult problem)
o Better spectral efficiency
o Seamless network of multiple protocols and air interfaces (since 4G will be all-IP, look for 4G systems to be compatible with all common network technologies, including 802.11, WCDMA, Bluetooth, and HiperLAN).
o An infrastructure to handle pre-existing 3G systems along with other wireless technologies, some of which are being developed concurrently.


 The two most important phenomena impacting telecommunications over the past decade have been explosive parallel growth of both the internet and mobile telephone services. The internet brought the benefits of data communications to the masses with email, the web, and ecommerce; while mobile service has enabled "follow-me anywhere/always on" telephony. The internet helped accelerate the trend from voice-centric to data-centric networking. Data already exceeds voice traffic and the data share continues to grow. Now these two worlds are converging. This convergence offers the benefits of new interactive multimedia services coupled to the flexibility and mobility of wireless. To realize the full potential of this convergence, however, we need broadband access connections.

Here we compare and contrast two technologies that are likely to play important roles: Third Generation mobile ("3G") and Wireless Local Area Networks ("WLAN"). The former represents a natural evolution and extension of the business models of existing mobile providers. In contrast, the WiFi approach would leverage the large installed base of WLAN infrastructure already in place. We use 3G and WiFi as shorthand for the broad classes of related technologies that have two quite distinct industry origins and histories.

Speaking broadly, 3G offers a vertically-integrated, top-down, service-provider approach to delivering wireless internet access, while WiFi offers an end-user-centric, decentralized approach to service provisioning. We use these two technologies to focus our speculations on the potential tensions between these two alternative world views. The wireless future will include a mix of heterogeneous wireless access technologies. Moreover, we expect that the two world views will converge, such that vertically-integrated service providers will integrate WiFi or other WLAN technologies into their 3G or wireline infrastructure when this makes sense. The multiplicity of potential wireless access technologies and/or business models provides some hope that we may be able to realize robust facilities-based competition for broadband local access services. If this occurs, it would help solve the "last mile" competition problem that has bedeviled telecommunications policy.

SOME BACKGROUND ON WiFi AND 3G

3G:
3G is a technology for mobile service providers. Mobile services are provided by service providers that own and operate their own wireless networks and sell mobile services to end-users. Mobile service providers use licensed spectrum to provide wireless telephone coverage over some relatively large, contiguous geographic service area; today this may include the entire country. From a user's perspective, the key feature of mobile service is that it offers ubiquitous and continuous coverage. To support the service, mobile operators maintain a network of interconnected and overlapping mobile base stations that hand off customers as those customers move among adjacent cells. Each mobile base station may support users up to several kilometers away. The cell towers are connected to each other by a backhaul network that also provides interconnection to the wireline Public Switched Telephone Network (PSTN) and other services. The mobile system operator owns the end-to-end network from the base stations to the backhaul networks to the point of interconnection to the PSTN. Third Generation (3G) mobile technologies will support higher bandwidth digital communications. To expand the range and capability of data services that can be supported by digital mobile systems, service providers will have to upgrade their networks to one of the 3G technologies, which can support data rates from 384 Kbps up to 2 Mbps.

WiFi
WiFi is the popular name for the wireless Ethernet 802.11b standard for WLANs. WiFi allows collections of PCs, terminals, and other distributed computing devices to share resources and peripherals such as printers and access servers. Ethernet, on which WiFi is based, has been one of the most popular LAN technologies.

HOW ARE WiFi AND 3G THE SAME?
From the preceding discussion, it might appear that 3G and WiFi address completely different user needs in quite distinct markets that do not overlap. While this was certainly more true of earlier generations of mobile services when compared with wired LANs or earlier versions of WLANs, it is increasingly not the case. The end-user does not care what technology is used to support his service. What matters is that both of these technologies are providing platforms for wireless access to the Internet and other communication services.

Mobile positioning technology has become an important area of research, for emergency as well as for commercial services. Mobile positioning in cellular networks will provide several services, such as locating stolen mobiles, emergency calls, different billing tariffs depending on where the call originates, and methods to predict the user's movement inside a region. The evolution to location-dependent services and applications in wireless systems continues to require the development of more accurate and reliable mobile positioning technologies. The major challenge to accurate location estimation is in creating techniques that yield acceptable performance when the direct path from the transmitter to the receiver is intermittently blocked. This is the Non-Line-Of-Sight (NLOS) problem, and it is known to be a major source of error, since it systematically causes the mobile to appear farther away from the base station (BS) than it actually is, thereby increasing the positioning error.

NEED FOR MOBILE TRACKING

Recent demands from new applications require positioning capabilities in mobile telephones or other devices. The ability to obtain the geo-location of the Mobile Telephone (MT) in the cellular system allows the network operators to facilitate new services to the mobile users. The most immediate motivation for the cellular system to provide MT position is enhanced accident and emergency services. The positioning of the mobile user could provide services like:

o Emergency service for subscriber safety.
o Location-sensitive billing.
o Cellular fraud detection.
o Intelligent transport system services.
o Efficient and effective network performance and management.

Location Tracking Curve Method
The method proposed here tracks the location of a mobile telephone using curves connecting the points where circles intersect one another, the circles' radii being the distances between BSs and the mobile telephone. The steps involved are:

a. Each base station near the mobile telephone receives a predetermined signal from the mobile telephone and calculates the distance between the mobile telephone and the base station, as well as the variance of the signal's time of arrival at the base station;
b. A circle is drawn with a radius equal to that distance and with the coordinates of the base station as its center;
c. A pair of base stations (the first and the second) is selected from among the base stations. Several location tracking curves connecting the two intersection points of the circles corresponding to the first and the second base stations are drawn, and one of the location tracking curves is selected using the variances of the first and the second base stations;
d. Step c is repeated for the other pairs of base stations;
e. The intersection points among the location tracking curves selected in steps c and d are obtained; and
f. The location of the mobile telephone is determined using the coordinates of the intersection points obtained in step e.

The several location tracking curves are parts of circles whose centers lie nearer to the base station with the smaller variance of the pair. The circles formed by the location tracking curves have their centers on the line connecting the coordinates of the first and the second base stations. The larger of the two variances is compared to the variances of the several location tracking curves, and one of the location tracking curves is selected according to the comparison result. The location coordinates of the mobile telephone are determined by averaging the coordinates of the intersection points obtained in step e.
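
The geometric core of the method can be sketched as follows (an illustration with made-up coordinates and ranges, not the authors' implementation): intersect the range circles of a pair of base stations and average the resulting intersection points to estimate the mobile's position.

    import math

    def circle_intersections(c1, r1, c2, r2):
        """Return the 0, 1 or 2 intersection points of two circles."""
        (x1, y1), (x2, y2) = c1, c2
        d = math.hypot(x2 - x1, y2 - y1)
        if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
            return []                                  # no usable intersection
        a = (r1**2 - r2**2 + d**2) / (2 * d)
        h = math.sqrt(max(r1**2 - a**2, 0.0))
        xm, ym = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
        return [(xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
                (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d)]

    # Two base stations and their measured ranges to the mobile (assumed values).
    points = circle_intersections((0.0, 0.0), 5.0, (8.0, 0.0), 5.0)
    estimate = (sum(p[0] for p in points) / len(points),
                sum(p[1] for p in points) / len(points))
    print(points, estimate)                            # estimate is (4.0, 0.0)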

Definition
Over the last decade, the growth of satellite service, the rise of digital cable, and the birth of HDTV have all left their mark on the television landscape. Now, a new delivery method threatens to shake things up even more powerfully. Internet Protocol Television (IPTV) has arrived, and backed by the deep pockets of the telecommunications industry, it's poised to offer more interactivity and bring a hefty dose of competition to the business of selling TV.

IPTV describes a system capable of receiving and displaying a video stream encoded as a series of Internet Protocol packets. If you've ever watched a video clip on your computer, you've used an IPTV system in its broadest sense. When most people discuss IPTV, though, they're talking about watching traditional channels on your television, where people demand a smooth, high-resolution, lag-free picture, and it's the telcos that are jumping headfirst into this market. Once known only as phone companies, the telcos now want to turn a "triple play" of voice, data, and video that will retire the side and put them securely in the batter's box. In this primer, we'll explain how IPTV works and what the future holds for the technology, focusing on the telcos' deployments even though IP can (and will) be used to deliver video over all sorts of networks, including cable systems.
How It Works

First things first: the venerable set-top box, on its way out in the cable world, will make a resurgence in IPTV systems. The box will connect to the home DSL line and is responsible for reassembling the packets into a coherent video stream and then decoding the contents. Your computer could do the same job, but most people still don't have an always-on PC sitting beside the TV, so the box will make a comeback. Where will the box pull its picture from? To answer that question, let's start at the source.

Most video enters the system at the telco's national headend, where network feeds are pulled from satellites and encoded if necessary (often in MPEG-2, though H.264 and Windows Media are also possibilities). The video stream is broken up into IP packets and dumped into the telco's core network, which is a massive IP network that handles all sorts of other traffic (data, voice, etc.) in addition to the video. Here the advantages of owning the entire network from stem to stern (as the telcos do) really come into play, since quality of service (QoS) tools can prioritize the video traffic to prevent delay or fragmentation of the signal. Without control of the network, this would be dicey, since QoS requests are not often recognized between operators. With end-to-end control, the telcos can guarantee enough bandwidth for their signal at all times, which is key to providing the "just works" reliability consumers have come to expect from their television sets.
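
For a flavor of what "prioritizing the video traffic" can look like at an endpoint, the sketch below marks outgoing UDP packets with an Expedited Forwarding DSCP value. The address and port are invented, and the real prioritization policy lives in the telco's network equipment (which must be configured to honor such markings), not in application code.

    import socket

    EF = 0x2E << 2     # "Expedited Forwarding" DSCP, shifted into the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # IP_TOS is available on most Unix-like platforms; routers that honor it
    # can queue these packets ahead of best-effort traffic.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)
    sock.sendto(b"video payload", ("198.51.100.20", 5004))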

The video streams are received by a local office, which has the job of getting them out to the folks on the couch. This office is the place where local content (such as TV stations, advertising, and video on demand) is added to the mix, but it's also the spot where the IPTV middleware is housed. This software stack handles user authentication, channel change requests, billing, VoD requests, etc.: basically, all of the boring but necessary infrastructure.

All the channels in the lineup are multicast from the national headend to the local offices at the same time, but at the local office a bottleneck becomes apparent. That bottleneck is the local DSL loop, which has nowhere near the capacity to stream all of the channels at once. Cable systems can do this, since their bandwidth can be in the neighborhood of 4.5 Gbps, but even the newest ADSL2+ technology tops out at around 25 Mbps (and this speed drops quickly as distance from the DSLAM [DSL Access Multiplexer] grows).
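
A simplified sketch of the receiving side of that multicast delivery: a set-top box (or any host) joins the multicast group carrying one channel and starts pulling its packets. The group address and port below are hypothetical.

    import socket
    import struct

    CHANNEL_GROUP = "239.1.1.1"   # hypothetical multicast address for one channel
    PORT = 5004                   # assumed RTP port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Join the group on the default interface; the network then forwards the stream.
    mreq = struct.pack("4s4s", socket.inet_aton(CHANNEL_GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    packet, _ = sock.recvfrom(2048)      # one packet of the video stream
    print(len(packet), "bytes received")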

 A computer virus is a self-replicating program containing code that explicitly copies itself and that can "infect" other programs by modifying them or their environment such that a call to an infected program implies a call to a possibly evolved copy of the virus.

These software "pranks" are very serious; they are spreading faster than they are being stopped, and even the least harmful of viruses could be life-threatening. For example, in the context of a hospital life-support system, a virus that "simply" stops a computer and displays a message until a key is pressed could be fatal. Further, those who create viruses cannot halt their spread, even if they wanted to. It requires a concerted effort from computer users to be "virus-aware", rather than continuing the ambivalence that has allowed computer viruses to become such a problem.

Computer viruses are actually a special case of something known as "malicious logic" or "malware".
Consider the set of programs which produce one or more programs as output. For any pair of programs p and q, p eventually produces q if and only if p produces q either directly or through a series of steps (the "eventually produces" relation is the transitive closure of the "produces" relation.) A viral set is a maximal set of programs V such that for every pair of programs p and q in V, p eventually produces q, and q eventually produces p. ("Maximal" here means that there is no program r not in the set that could be added to the set and have the set still satisfy the conditions.) For the purposes of this paper, a computer virus is a viral set; a program p is said to be an instance of, or to be infected with, a virus V precisely when p is a member of the viral set V. A program is said to be infected simpliciter when there is some viral set V of which it is a member. A program which is an instance of some virus is said to spread whenever it produces another instance of that virus. The simplest virus is a viral set that contains exactly one program, where that program simply produces itself. Larger sets represent polymorphic viruses, which have a number of different possible forms, all of which eventually produce all the others.
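
Restated compactly in notation of my own choosing (the prose definitions above remain the authoritative ones), with P(p, q) meaning "p directly produces q" and E its transitive closure "p eventually produces q":

    \[
      E(p, q) \iff P(p, q) \;\lor\; \exists r\, \big[\, P(p, r) \wedge E(r, q) \,\big]
    \]
    \[
      V \text{ is a viral set} \iff \forall p, q \in V:\; E(p, q) \wedge E(q, p),
      \ \text{and no strict superset of } V \text{ satisfies this condition.}
    \]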

An optical mouse is an advanced computer pointing device that uses a light-emitting diode (LED), an optical sensor, and digital signal processing (DSP) in place of the traditional mouse ball and electromechanical transducer. Movement is detected by sensing changes in reflected light, rather than by interpreting the motion of a rolling sphere.

The optical mouse takes microscopic snapshots of the working surface at a rate of more than 1,000 images per second. If the mouse is moved, the image changes. The tiniest irregularities in the surface can produce images good enough for the sensor and DSP to generate usable movement data. The best surfaces reflect but scatter light; an example is a blank sheet of white drawing paper. Some surfaces do not allow the sensor and DSP to function properly because the irregularities are too small to be detected. An example of a poor optical-mousing surface is unfrosted glass.
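
A toy version of the underlying DSP step might look like the following (an illustration of the idea, not actual sensor firmware): given two successive snapshots of the surface, the displacement is estimated as the shift that minimizes the mismatch between their overlapping regions.

    import numpy as np

    rng = np.random.default_rng(0)
    surface = rng.random((64, 64))      # stand-in for the texture of the desk

    frame1 = surface[10:42, 10:42]      # 32x32 snapshot at time t
    frame2 = surface[13:45, 12:44]      # next snapshot: the sensor moved by (3, 2)

    best_shift, best_err = (0, 0), float("inf")
    for dy in range(-5, 6):
        for dx in range(-5, 6):
            # Overlapping regions of the two frames under this candidate shift.
            a = frame1[max(dy, 0):32 + min(dy, 0), max(dx, 0):32 + min(dx, 0)]
            b = frame2[max(-dy, 0):32 + min(-dy, 0), max(-dx, 0):32 + min(-dx, 0)]
            err = np.mean((a - b) ** 2)              # mismatch for this shift
            if err < best_err:
                best_err, best_shift = err, (dy, dx)

    print("estimated motion:", best_shift)           # recovers (3, 2)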

In practice, an optical mouse does not need cleaning, because it has no moving parts. This all-electronic feature also eliminates mechanical fatigue and failure. If the device is used with the proper surface, sensing is more precise than is possible with any pointing device using the old electromechanical design. This is an asset in graphics applications, and it makes computer operation easier in general.





The Firewalls and Internet Security seminar defines three basic types of firewalls: packet filters, circuit level gateways, and application gateways. Of course there are also hybrid firewalls, which can be combinations of all three.
Packet filter gateways are usually comprised of a series of simple checks based on the source and destination IP address and ports. They are transparent to the user, who will probably not even realize that the checks are taking place (unless, of course, a connection is denied!). However, that simplicity is also their biggest problem: there is no way for the filter to securely distinguish one user from another. Packet filters are frequently located on routers, and most major router vendors supply packet filters as part of the default distribution. You may have heard of smart packet filters.
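
The kind of check a packet filter performs can be sketched in a few lines (the rule set below is invented for illustration): each packet's source address, destination address, and destination port are matched against an ordered rule list, and the first matching rule decides its fate.

    import ipaddress

    RULES = [
        # (action, source network, destination network, destination port)
        ("allow", "0.0.0.0/0",    "192.0.2.0/24", 25),    # inbound mail
        ("allow", "192.0.2.0/24", "0.0.0.0/0",    None),  # anything outbound
        ("deny",  "0.0.0.0/0",    "0.0.0.0/0",    None),  # default: deny
    ]

    def filter_packet(src_ip, dst_ip, dst_port):
        for action, src_net, dst_net, port in RULES:
            if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_net)
                    and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(dst_net)
                    and (port is None or port == dst_port)):
                return action
        return "deny"

    print(filter_packet("198.51.100.7", "192.0.2.10", 25))   # allow
    print(filter_packet("198.51.100.7", "192.0.2.10", 23))   # deny

Note that such a filter sees only addresses and ports; nothing in it can tell which user on the source machine opened the connection, which is exactly the limitation described above.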

Smart packet filters are really not very different from simple packet filters except they have the ability to interpret the data stream and understand that other connections, which would normally be denied, should be allowed (e.g. ftp's PORT command would be understood and the reverse connection allowed). Smart packet filters, however, still cannot securely distinguish one user on a machine from another. Brimstone incorporates a very smart and configurable application layer filter.

Circuit-level gateways are much like packet filters except that they operate at a different level of the OSI protocol stack. Unlike most packet filters, connections passing through a circuit-level gateway appear to the remote machine as if they originated from the firewall. This is very useful to hide information about protected networks. Socks is a popular de-facto standard for automatic circuit-level gateways. Brimstone supports both Socks and a manual circuit-level gateway.

Application gateways represent a totally different concept for firewalls. Instead of a list of simple rules which control which packets or sessions should be allowed through, a program accepts the connection, typically performs strong authentication on the user (which often requires one-time passwords), and then often prompts the user for information on what host to connect to. This is, in some senses, more limited than packet filters and circuit-level gateways, since you must have a gateway program for each application (e.g. telnet, ftp, X11, etc.). However, for most environments it provides much higher security because, unlike the other types of gateways, it can perform strong user authentication to ensure that the person on the other end of the IP connection is really who they say they are. Additionally, once you know who you are talking to, you can perform other types of access checks on a per-user basis, such as what times they can connect, what hosts they can connect to, what services they can use, etc. Many people only consider application gateways to be true firewalls, because of the lack of user authentication in the other two types. The core Brimstone ACL provides application gateway functionality.

Hybrid gateways are ones where the above types are combined. Quite frequently one finds an application gateway combined with a circuit-level gateway or packet filter, since this can allow internal hosts unencumbered access to unsecured networks while forcing strong security on connections from unsecured networks into the secured internal networks. Recommended Brimstone configurations are hybrid firewalls.



Definition
The millions of businesses, the billions of humans that compose them, and the trillions of devices that they will depend upon all require the services of the IT industry to keep them running. And it's not just a matter of numbers. It's the complexity of these systems and the way they work together that is creating a shortage of skilled IT workers to manage all of the systems. It's a problem that is not going away, but will grow exponentially, just as our dependence on technology has.
The solution is to build computer systems that regulate themselves much in the same way our autonomic nervous system regulates and protects our bodies. This new model of computing is called autonomic computing. The good news is that some components of this technology are already up and running. However, complete autonomic systems do not yet exist. Autonomic computing calls for a whole new area of study and a whole new way of conducting business.



The Benefits
Autonomic computing was conceived to lessen the spiraling demands for skilled IT resources, reduce complexity and to drive computing into a new era that may better exploit its potential to support higher order thinking and decision making. Immediate benefits will include reduced dependence on human intervention to maintain complex systems accompanied by a substantial decrease in costs. Long-term benefits will allow individuals, organizations and businesses to collaborate on complex problem solving.



The Problem
Within the past two decades the development of raw computing power coupled with the proliferation of computer devices has grown at exponential rates. This phenomenal growth, along with the advent of the Internet, has led to a new age of accessibility: to other people, other systems, and most importantly, to information. This boom has also led to unprecedented levels of complexity.

The simultaneous explosion of information and integration of technology into everyday life has brought on new demands for how people manage and maintain computer systems. Demand is already outpacing supply when it comes to managing complex, and even simple computer systems. Even in uncertain economic times, demand for skilled IT workers is expected to increase by over 100 percent in the next six years.

As access to information becomes omnipresent through PCs, handheld, and wireless devices, the stability of current infrastructure, systems, and data is at increasingly greater risk of outages and general disrepair. IBM believes that we are quickly reaching a threshold moment in the evolution of the industry's views toward computing in general and the associated infrastructure, middleware, and services that maintain them. The increasing system complexity is reaching a level beyond human ability to manage and secure.

This increasing complexity with a shortage of skilled IT professionals points towards an inevitable need to automate many of the functions associated with computing today.



The Solution
IBM's proposed solution looks at the problem from the most important perspective: the end user's. How do IT customers want computing systems to function? They want to interact with them intuitively, and they want to have to be far less involved in running them. Ideally, they would like computing systems to pretty much take care of the mundane elements of management by themselves.
The most direct inspiration for this functionality that exists today is the autonomic function of the human central nervous system. Autonomic controls use motor neurons to send indirect messages to organs at a sub-conscious level. These messages regulate temperature, breathing, and heart rate without conscious thought. The implications for computing are immediately evident; a network of organized, "smart" computing components that give us what we need, when we need it, without a conscious mental or even physical effort.
IBM has named its vision for the future of computing "autonomic computing." This new paradigm shifts the fundamental definition of the technology age from one of computing, to one defined by data. Access to data from multiple, distributed sources, in addition to traditional centralized storage devices will allow users to transparently access information when and where they need it. At the same time, this new view of computing will necessitate changing the industry's focus on processing speed and storage to one of developing distributed networks that are largely self-managing, self-diagnostic, and transparent to the user. 


Definition:


Using an ordinary phone is, for most people, a common daily occurrence, as is listening to your favorite CD of digitally recorded music. It is only a small extension of these technologies to have your voice transmitted in data packets. The transmission of voice in the phone network was done originally using an analog signal, but this has been replaced in much of the world by digital networks. Although many of our phones are still analog, the network that carries that voice has become digital.

In today's phone networks, the analog voice going into our analog phones is digitized as it enters the phone network. This digitization process, shown in Figure 1 below, records a sample of the loudness (voltage) of the signal at fixed intervals of time. These digital voice samples travel through the network one byte at a time.
At the destination phone line, the byte is put into a device that takes the voltage number and produces that voltage for the destination phone. Since the output signal is the same as the input signal, we can understand what was originally spoken.
The evolution of that technology is to take the numbers that represent the voltage and group them together in a data packet, similar to the way computers send and receive information over the Internet. Voice over IP is the technology of taking units of sampled speech data and carrying them across the network in IP packets.
So at its most basic level, the concept of VoIP is straightforward. The complexity of VoIP comes in the many ways to represent the data, setting up the connection between the initiator of the call and the receiver of the call, and the types of networks that carry the call.
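
A minimal sketch of that sampling-and-packetizing step (illustrative values only; real systems add codecs such as G.711 or G.729, RTP headers, and jitter buffers) might look like this:

    import math

    SAMPLE_RATE = 8000        # telephony sampling rate: 8000 samples per second
    PACKET_MS = 20            # milliseconds of speech carried per packet

    def sample_voice(duration_s):
        """Stand-in for the microphone/ADC: a 440 Hz tone quantised to 8 bits."""
        n = int(SAMPLE_RATE * duration_s)
        return bytes(
            int(127 + 100 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE))
            for t in range(n)
        )

    samples = sample_voice(0.1)                     # 100 ms of "speech"
    chunk = SAMPLE_RATE * PACKET_MS // 1000         # 160 samples per packet
    packets = [samples[i:i + chunk] for i in range(0, len(samples), chunk)]
    print(len(packets), "packets of", chunk, "bytes each")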

Using data packets to carry voice is not done only with IP packets. Although they won't be discussed here, there are also Voice over Frame Relay (VoFR) and Voice over ATM (VoATM) technologies. Many of the VoIP issues discussed here also apply to the other packetized voice technologies.
The increasing multimedia content on the Internet has drastically reduced the objections to putting voice on data networks. Basically, Internet telephony is to transmit multimedia information, such as voice or video, in discrete packets over the Internet or any other IP-based Local Area Network (LAN) or Wide Area Network (WAN). Commercial Voice over IP (Internet Protocol) was introduced in early 1995, when VocalTec introduced its Internet telephone software. Because the technologies and the market have gradually reached their maturity, many industry-leading companies have developed products for Voice over IP applications since 1995.
VoIP, or "Voice over Internet Protocol", refers to sending voice and fax phone calls over data networks, particularly the Internet. This technology offers cost savings by making more efficient use of the existing network.

Traditionally, voice and data were carried over separate networks optimized to suit the differing characteristics of voice and data traffic. With advances in technology, it is now possible to carry voice and data over the same networks whilst still catering for the different characteristics required by voice and data.
Voice over Internet Protocol (VoIP) is an emerging technology that allows telephone calls or faxes to be transported over an IP data network.
