Thursday, October 02, 2008

History of radar

Several inventors, scientists, and engineers contributed to the development of radar. The first to use radio waves to detect "the presence of distant metallic objects" was Christian Hülsmeyer,[2][3] who in 1904 demonstrated the feasibility of detecting the presence of a ship in dense fog, but not its distance. He received Reichspatent Nr. 165546 for his pre-radar device in April, and patent 169154 on November 11 for a related amendment. He also received a patent (GB13170) in England for his telemobiloscope on September 22, 1904.[2][4]
Nikola Tesla, in August 1917, first established principles regarding frequency and power level for the first primitive radar units. Before the Second World War, developments by the Americans (Dr. Robert M. Page tested the first monopulse radar in 1934),[5] the Germans, the French (French Patent n° 788795 in 1934),[6] and mainly the British, who were the first to fully exploit it as a defence against aircraft attack (British Patent GB593017 by Robert Watson-Watt in 1935),[6][7] led to the first real radars. In the same vein, the Hungarian Zoltán Bay produced a working model by 1936 at the Tungsram laboratory. In 1934, Émile Girardeau, working with the first French radar systems, stated he was building radar systems "conceived according to the principles stated by Tesla".[1]
The war precipitated research to find better resolution, more portability and more features for the new defence technology. Post-war years have seen the use of radar in fields as diverse as air traffic control, weather monitoring, astrometry and road speed control.

This long-range radar antenna, known as ALTAIR, is used to detect and track space objects in conjunction with ABM testing at the Ronald Reagan Test Site on the Kwajalein atoll.[1]
Radar is a system that uses electromagnetic waves to identify the range, altitude, direction, or speed of both moving and fixed objects such as aircraft, ships, motor vehicles, weather formations, and terrain. A transmitter emits radio waves, which are reflected by the target and detected by a receiver, typically in the same location as the transmitter. Although the radio signal returned is usually very weak, radio signals can easily be amplified. This enables a radar to detect objects at ranges where other emissions, such as sound or visible light, would be too weak to detect. Radar is used in many contexts, including meteorological detection of precipitation, air traffic control, police detection of speeding traffic, and by the military. It was originally called RDF (Radio Direction Finder) in Britain. The term RADAR was coined in 1941 as an acronym for Radio Detection and Ranging. The term has since entered the English language as a standard word, radar, losing the capitalization in the process.
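
As a rough illustration of the ranging principle, here is a minimal Python sketch (the delay value is made up for illustration): the target's range follows directly from the round-trip time of the echo.

C = 3.0e8  # propagation speed of radio waves, m/s (approximately the speed of light)

def target_range(round_trip_seconds):
    # The echo travels out and back, hence the factor of 2.
    return C * round_trip_seconds / 2.0

# An echo received 200 microseconds after transmission puts the target
# about 30 km away.
print(target_range(200e-6))  # 30000.0 m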


Robot-assisted surgery

Robot-assisted surgery is the latest development in the larger movement of endoscopy, a type of minimally invasive surgery--the idea being that less invasive procedures translate into less trauma and pain for patients. Surgery through smaller incisions typically results in less scarring and faster recovery. It's not that robots are changing the basics of surgery: surgeons are still cutting and sewing as they have for decades. Robots represent a new computer-assisted tool that provides another way for surgeons to work.

Rather than cutting patients open, endoscopy allows surgeons to operate through small incisions by using an endoscope. This fiber optic instrument has a small video camera that gives doctors a magnified internal view of a surgical site on a television screen.
In abdominal endoscopy, known as laparoscopy, surgeons thread the fiber optic instrument into the abdomen. First performed in the late 1980s, laparoscopy is now routine for many procedures, such as surgery on the gallbladder and on female organs.

With robotic surgical systems, surgeons don't move endoscopic instruments directly with their hands. Instead, surgeons sit at a console several feet from the operating table and use joysticks similar to those used in video games. They perform surgical tasks by guiding the movement of the robotic arms in a process known as tele-manipulation.

MODERN ROBOTIC SURGERY SYSTEMS

GENERAL LAYOUT OF A ROBOTIC SURGICAL SYSTEM


A general robotic surgical system consists of the following:

· Surgeon console
· Image-processing equipment
· EndoWrist instruments
· Surgical arm cart
· High-resolution 3D endoscope

TYPES OF ROBOTIC SYSTEMS

1. AESOP ROBOTIC SYSTEM


AESOP stands for Automated Endoscopic System for Optimal Positioning. Basically, it consists of one robotic arm, which holds the endoscope in position. Foot pedals or voice-activated software allow the physician to control the endoscope as required. The system was developed by Computer Motion, and AESOP was the first FDA-approved robot for operating room assistance (1994).

2. Da VINCI SURGICAL SYSTEM

In July 2000, the FDA cleared da Vinci as an endoscopic instrument control system for use in laparoscopic (abdominal) surgical procedures such as removal of the gallbladder and surgery for severe heartburn. In March 2001, the FDA cleared da Vinci for use in general non-cardiac thoracoscopic (inside the chest) surgical procedures--surgeries involving the lungs, esophagus, and the internal thoracic artery. The latter is also known as the internal mammary artery, a blood vessel inside the chest cavity; in coronary bypass surgery, surgeons detach the internal mammary artery and reroute it to a coronary artery. In June 2001, the FDA cleared da Vinci for use during laparoscopic removal of the prostate (radical prostatectomy).

The da Vinci is intended to assist in the control of several endoscopic instruments, including rigid endoscopes, blunt and sharp dissectors, scissors, scalpels, and forceps. The system is cleared by the FDA to manipulate tissue by grasping, cutting, dissecting and suturing.
In use, a surgeon sits at a console several feet away from the operating table and manipulates the robot's surgical instruments. The robot has three arms attached to a free-standing cart. One arm holds a camera (endoscope) that has been passed into the patient through small openings. The surgeon operates the other two arms by inserting fingers into rings.

The arms use a technology called EndoWrist--flexible wrists that surgeons can bend and twist like human wrists. The surgeon uses hand movements and foot pedals to control the camera, adjust focus, and reposition the robotic arms. The da Vinci has a three-dimensional lens system, which magnifies the surgical field up to 15 times. Another surgeon stays beside the patient, adjusting the camera and instruments if needed.

3. ZEUS SURGICAL SYSTEM

The most exciting product to date from Computer Motion is the Zeus minimally invasive surgical robot system. It is also the one that made regular headlines in the second half of 1999 and early in 2000.

Minimally invasive surgery (MIS) has been around for over a decade now. About 4 million procedures are carried out annually around the world by surgeons using long, slender devices to probe, cut and repair patient tissues and organs. While MIS has led to faster recovery times for patients, surgeons find the technique physically challenging because it limits precision and dexterity, and brings on fatigue more rapidly.

In introducing the AESOP robot, Computer Motion had already improved one element of MIS, namely support and positioning of endoscopic cameras. With the Zeus system, all the instruments are robotic. The surgeon can sit comfortably at a master console and control the slave robotic instruments using a pair of master manipulators. The following advantages are achieved:

· The fatigue factor is substantially reduced, as the surgeon is seated and does not have to hold the instruments constantly.
· The robotic instruments follow the surgeon's motions while filtering out tremors; with motion scaling, they can also execute micro-movements that may be humanly impossible (see the sketch below).
· With robotic instruments, the incisions needed are even smaller than with previous MIS instruments, leading to less trauma for patients and hence shorter recovery times.
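
To make the motion-scaling and tremor-filtering idea concrete, here is a toy Python sketch. The moving-average filter and the 5:1 scale factor are assumptions of this sketch; the actual Zeus filtering algorithms are proprietary.

import numpy as np

def filter_and_scale(hand_positions_mm, window=5, scale=0.2):
    # A moving average suppresses high-frequency tremor; scaling turns
    # centimetre-scale hand motion into millimetre-scale tool motion.
    kernel = np.ones(window) / window
    smoothed = np.convolve(np.asarray(hand_positions_mm, float), kernel, mode="same")
    return smoothed * scale

# A 10 Hz tremor riding on a slow, deliberate one-second movement:
t = np.linspace(0.0, 1.0, 200)
hand = 50.0 * t + 2.0 * np.sin(2 * np.pi * 10.0 * t)  # hand position, mm
tool = filter_and_scale(hand)
print(tool[:3])  # smoothed, scaled tool positions in mm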

Zeus has a setup similar to the da Vinci system. It includes the following:

· A computer workstation
· A video display
· Hand controls to manipulate the table-mounted surgical instruments
· Endoscope inserted into patient
· A set of Working Robotic Arms

Using the Zeus system, surgeons have achieved the following remarkable milestones:
In France, a series of operations has been performed on infants to repair a condition known as patent ductus arteriosus. Using the Zeus robotic system, a surgeon was able to close an open artery with only three incisions of 0.2" diameter each on the patient's body, in contrast to the 4-5" opening and rib cage separation that were previously necessary.
In Canada, a surgeon using the Zeus system was able to perform a bypass procedure on a beating heart, again with small incisions only rather than a split chest. The patient was able to return home the day after the operation.


FUTURE SCOPE

1. TELESURGERY

Telesurgery is basically robotic surgery performed across long distances: the surgeon and the patient are in two different places while the surgery is carried out.
World's First Telesurgery :
On September 7, 2001 a doctor in New York removed the diseased gallbladder of a 68-year-old patient in Strasbourg, France. The surgeon used a computer with a high-speed network connection to move robotic tools in the French operating room.
Someday doctors may be able to use this technology to operate on patients in dangerous or inaccessible locations.

2. THE HEALING TOUCH

Research is being done to give surgeons a 'robotic feeling' of the patient's body tissues. Sensors are being developed to send three-dimensional information about organs to tiny pins on the surgeon's fingertips. The doctor can then feel changes in texture or in the strength of the grip. This technology may be used to detect lung tumors or to work on delicate tissues.

3. VOICE CONTROLLED SURGERY

In voice-controlled surgery, the microphone and headset that the surgeon wears allow him to control robotic surgical instruments and cameras, as well as room lighting and other equipment, with voice commands. This frees the other operating room personnel from adjusting equipment. Voice command is the basic control tool of the HERMES robotic system.

4. ROBOTIC BRAIN SURGERY

Robotic surgical tools give the doctor finer control over delicate movements and more accurate pinpointing of the diseased area in the brain. This allows the surgery to be performed without fitting patients with the painful, cumbersome immobilizing frame that is needed for traditional brain surgery.

5. PRACTICE MAKES PERFECT

Doctors at the Surgical Planning Lab in the UK can use 3D images of their patients to see the shapes of tumors and to practice surgeries. This technology is also used to track changes in the body over time, whether for disease sufferers or for aging patients.
NASA is developing a robotic probe to detect the shape of brain tumors. The robot’s precise movements and the probe’s thin wires cause less damage to the brain than traditional surgery.




Surface conduction Electron emitter Display (SED)

SED technology has been in development since 1987. A flat-panel display technology that employs a surface-conduction electron emitter for each individual display pixel is referred to as a Surface-conduction Electron-emitter Display (SED). Though the technologies differ, the basic principle that emitted electrons excite a phosphor coating on the display panel is common to both SED technology and traditional cathode ray tube (CRT) televisions.

When driven by moderate voltages (tens of volts), electrons tunnel across a thin slit in the surface-conduction electron emitter apparatus. Some of these electrons are scattered at the receiving pole and are accelerated towards the display surface, between the display panel and the emitter apparatus, by a large voltage gradient (tens of kV). These emitted electrons then excite the phosphor coating on the display panel, producing the image.

The main advantage of SEDs compared with LCDs and CRTs is that they provide the best mix of both technologies. An SED combines the slim form factor of LCDs with the superior contrast ratio, exceptional response time and better picture quality of CRTs. SEDs also provide higher brightness, better color performance and wider viewing angles, and consume much less power. Moreover, SEDs do not require a deflection system for the electron beam, which has helped manufacturers create display designs that are only a few inches thick yet still light enough to be hung on a wall. These properties also make it possible to enlarge the display panel simply by increasing the number of electron emitters relative to the required number of pixels. Canon and Toshiba are the two major companies working on SEDs. The technology is still developing, and further breakthroughs can be expected from the research.


SMART DUST

The goal of the Smart Dust project is to build a self-contained, millimeter-scale sensing and communication platform for a massively distributed sensor network. This device will be around the size of a grain of sand and will contain sensors, computational ability, bi-directional wireless communications, and a power supply, while being inexpensive enough to deploy by the hundreds. The science and engineering goal of the project is to build a complete, complex system in a tiny volume using state-of-the-art technologies, which will require evolutionary and revolutionary advances in integration, miniaturization, and energy management. We foresee many applications for this technology: weather and seismological monitoring on Mars, internal spacecraft monitoring, land- and space-based communication networks, chemical and biological sensors, weapons stockpile monitoring, defense-related sensor networks, inventory control, product quality monitoring, smart office spaces, and sports (sailing, balls).


Wednesday, October 01, 2008

THE GPS Technology


1. INTRODUCTION

Throughout time, people have developed a variety of ways to figure out their position on Earth and to navigate from one place to another. Early mariners relied on angular measurements to celestial bodies like the sun and stars to calculate their location. The 1920s witnessed the introduction of a more advanced technique, radio navigation, based at first on radios that allowed navigators to locate the direction of shore-based transmitters when in range. The later development of artificial satellites made possible the transmission of more precise, line-of-sight radio navigation signals and sparked a new era in navigation technology. Satellites were first used for position finding in a simple but reliable 2D Navy system called Transit. This laid the groundwork for a system that would later revolutionize navigation forever: the Global Positioning System.


The Global Positioning System (GPS) is a satellite-based navigation system. The concept of GPS was introduced by the United States Department of Defense (DoD), and the system was completely developed by 1994. GPS was developed to provide continuous, highly precise position, velocity and time information to land, sea, air and space based users. The intent of the system is to use a combination of ground stations, orbiting satellites and special receivers to provide navigation capabilities to virtually everyone, at any time, anywhere in the world, regardless of weather conditions.


2. THE GPS SEGMENTS
2.1. The Space Segment : The space segment, also known as the satellite segment, consists of 24 operational satellites revolving around the Earth in 6 orbital planes spaced approximately 60° apart. GPS satellites are not geosynchronous. The first satellite was launched in 1978. The satellites take approximately 12 hours to orbit the Earth, revolving in circular, inclined orbits. Of the 24 satellites, 21 are working satellites and the remaining 3 are on standby. In the event of a satellite failure, one of the spare space vehicles can be moved into its place using modern propulsion and guidance systems. Each satellite circles the Earth twice every day at an altitude of 20,200 kilometers. Between 5 and 8 satellites are visible to a user at any time, thereby ensuring worldwide coverage. Information from three satellites is needed to calculate a navigation unit's horizontal location on the Earth's surface (2D reporting), while information from four satellites enables a receiver to also determine its altitude (3D reporting). Each satellite contains a cesium atomic clock, and all these clocks are synchronized and accurate to within a few nanoseconds.

2.2 THE CONTROL SEGMENT : The GPS control segment (CS), called the Operational Control System (OCS), includes all the fixed-location, ground-based monitor stations located throughout the world, a Master Control Station (MCS) and the up-link transmitters. The monitor stations are simply GPS receivers that track the satellites as they pass overhead and accumulate ranging and ephemeris data from them. These ground stations are responsible for monitoring the flight paths of the GPS satellites and synchronizing the satellites' onboard atomic clocks. The data are relayed to the MCS, where they are processed and the actual satellite positions are compared with the GPS-computed positions. The MCS receives data from the monitor stations in real time, 24 hours a day, and uses that information to determine whether any satellites are experiencing clock or ephemeris changes and to detect malfunctions. Corrections are computed and then uploaded to the satellites twice per day via the up-link antennae.

2.3. The User Segment : The GPS user segment consists of all GPS receivers and the user community. Initially the GPS service was available for military purposes only, but in the 1980s the United States Government made the service available to civilians as well. Fig 2.3 shows a GPS receiver. GPS receivers convert signals received from the space vehicles into position, velocity and time estimates. A GPS navigation set contains an antenna, receiver, data processor and display unit. The satellite signals are processed by the data processor of the navigation set to demodulate the data and then decode it to obtain the user's 3D position coordinates. GPS receivers are used for navigation, positioning, aviation, shipping, geology and other purposes.

3. WORKING
Fig 3.1: RANGING CALCULATION
Consider the case of lightning (Fig 3.1) followed by thunder. A few seconds after seeing the lightning we hear the thunder. If we know the time taken for the sound waves to travel from the place of the lightning strike to the listener, we can calculate the distance between the two. A similar principle is used in the working of GPS.

The GPS system works by determining how long a radio signal transmitted from a satellite takes to reach a land-based receiver, and then using that time to calculate the distance between the satellite and the Earth station receiver. Radio waves travel at approximately the speed of light, 3 × 10^8 m/s. If a receiver can determine exactly when a satellite began sending a radio signal and exactly when the signal was received, then it can determine the propagation time. From the propagation time, the receiver can determine the distance between itself and the satellite using the mathematical relationship

d = v × t

where
d = distance between satellite and receiver (meters)
v = velocity (3 × 10^8 m/s)
t = propagation time (seconds)
Time is the most important factor in the working of GPS. Time synchronization between the GPS receiver and the on-board clocks is very important; only then can the ranging calculations be done accurately. The satellite transmitter and the Earth station receiver produce identical synchronizing (pseudorandom) codes at exactly the same time, accurate to within a few nanoseconds. Each satellite continuously transmits its precise synchronizing code. After a synchronizing code is acquired, the receiver compares the received code with its own locally produced code to determine the propagation time. The time difference multiplied by the velocity of the radio signal gives the distance to the satellite.
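
As a toy numerical sketch of this ranging step (illustrative values only, in Python):

C = 3.0e8  # radio propagation speed, m/s (approximately the speed of light)

def satellite_distance(t_transmit, t_receive):
    # d = v * t: distance from the measured propagation time
    return C * (t_receive - t_transmit)

# A signal sent at t = 0 s and received 0.07 s later came from a satellite
# roughly 21,000 km away, plausible for a 20,200 km orbital altitude.
print(satellite_distance(0.0, 0.07))  # 21000000.0 m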

If the Earth station receiver knows the location of a single satellite and the distance to that satellite, it knows that it must be located somewhere on an imaginary sphere centered on the satellite, with a radius equal to that distance. If the receiver knows the locations of two satellites and its distances from them, it can narrow its location to somewhere on the circle formed where the two spheres intersect, as shown in Fig 3.2.

If the location of and distance to a third satellite are known, a receiver can pinpoint its location to one of two possible points in space, as shown in Fig 3.3. If the location of and distance from a fourth satellite are known, the altitude, and hence the full 3D position of the Earth station, can also be determined.
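
To make the sphere-intersection idea concrete, here is a minimal Python sketch with made-up coordinates. The sphere equations are linearised and solved by least squares, which is only one of several possible approaches; real receivers must additionally solve for their own clock error.

import numpy as np

def locate(sat_positions, ranges):
    # Estimate the receiver position from 4+ satellite positions and ranges
    # by linearising the sphere equations |x - p_i| = r_i.
    p = np.asarray(sat_positions, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (r[0]**2 - r[1:]**2) + np.sum(p[1:]**2, axis=1) - np.sum(p[0]**2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Four hypothetical satellites (km) and exact ranges to a receiver at (1, 2, 3):
sats = np.array([[15, 0, 20], [0, 18, 21], [-12, 9, 19], [7, -14, 22]], dtype=float)
receiver = np.array([1.0, 2.0, 3.0])
ranges = np.linalg.norm(sats - receiver, axis=1)
print(locate(sats, ranges))  # approximately [1. 2. 3.]
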
4. LEVELS OF SERVICES

4.1. STANDARD POSITIONING SERVICE (SPS) : It is the positioning and timing service that is made available to all GPS users (military, private and commercial) on a continuous, worldwide basis. It provides a horizontal accuracy of 100 m, a vertical accuracy of 156 m and a 3D accuracy of 185 m. SPS is provided on the GPS L1 frequency (1575.42 MHz).

4.2. PRECISE POSITIONING SERVICE (PPS) : PPS is a highly accurate military positioning, velocity and timing service which is available on a continuous, worldwide basis to users authorized by the Department of Defense. PPS is provided on the GPS L2 frequency (1227.60 MHz); both the L1 and L2 frequencies are used for high-precision work. It has a horizontal accuracy of 20 m, a vertical accuracy of 27 m and a 3D accuracy of 35 m. Cryptographic equipment is used to prevent unauthorized use of PPS. This service is more precise than SPS.

5. APPLICATIONS OF GPS

5.1 POSITIONING : GPS can be used to find the exact position of a person or a vehicle very easily. Whatever the weather conditions, we can easily locate a person or a vehicle equipped with a GPS receiver. Fig 5.1 shows the positioning of a receiver in smoky weather conditions. Precise location data for any point on the planet are possible using GPS. The system is used to locate persons and vehicles when they are lost.

5.2 NAVIGATION SYSTEM : For navigation purposes, GPS receivers are used in cars, aeroplanes, ships and even space vehicles. Fig 5.2 shows a GPS system used in cars. The current location of the vehicle and the road maps are displayed on an LCD screen, so the shortest path to the destination can be selected. In aviation, GPS is used to know the exact location of the plane and its distance from different airports; automatic pilot systems are based on this technology. It is also used by sailors and cruise ships to know their position at sea, and space vehicles (SVs) use GPS to know their current locations.

5.3 TRACKING : The path through which a person or a vehicle moves can be traced easily using GPS, so it is used in shipping and aviation to track vehicles. The tracking facility is also used in industrial applications to follow the processes through which a product moves. The velocity of the vehicle can be calculated as well.

5.4 MILITARY PURPOSES : GPS is used by the military for setting targets and guiding missiles. During wartime, the military uses this technology to know the positions of its forces and their movements in the war region. Aircraft like the F-16, the B-2 bomber and KC-135 aerial refuelers make use of this technology. Missiles like the Tomahawk are guided using GPS to destroy their targets. When a country uses GPS-guided missiles against an enemy, it is possible for the enemy country to locate some of the points in the missile's trajectory by making use of its own GPS. Using these points, the actual trajectory can be reconstructed and the location from which the missile was fired can be predicted; there is a good chance that this location is a military base. Fig 5.4 shows a GPS-guided missile.

5.5 PUBLIC SAFETY : GPS technology is used to help people in emergencies. When an emergency call is made, the call is automatically forwarded to a public-safety answering point (PSAP), also called an emergency call center. When the call is answered, the call center operator is provided with automatic location information (ALI), pinpointing the exact position of the caller. The PSAP passes this information to the nearby rescue team, making rescue operations faster.

5.6 TIME SYNCHRONIZATION : Many synchronization systems use GPS as a source of accurate time; one of the most common applications is using GPS as a reference clock for time-code generators. For instance, when deploying sensors (for seismology or other monitoring applications), GPS may be used to provide each recording apparatus with a precise time source, so that the times of events are recorded accurately. For geographically dispersed stations, time synchronization is done using GPS.

6. CONCLUSION
Though originally designed to help US forces around the world locate targets and move quickly, GPS is now being used across the world, from mountaineers climbing Mount Everest to sailboats journeying into the oceans. Its functions have extended beyond positioning to tracking, mapping and more. GPS's future seems secure. Its biggest push now is the Federal Communications Commission (FCC) enhanced-911 mandate: new cell phones will be GPS-enabled. GPS receivers in the future will be able to give accuracy up to 5 mm. There is still room for improvement in GPS, and as it improves we will find it being used more and more in our daily lives, to the point where it would be hard to perform many travel and industry tasks without it.


The SIDAC

The SIDAC, or SIlicon Diode for Alternating Current, is a semiconductor of the thyristor family. Also referred to as a SYDAC (Silicon thYristor for Alternating Current), bi-directional thyristor breakover diode, or more simply a bi-directional thyristor diode, it is technically specified as a bilateral voltage triggered switch. Its operation is identical to that of the DIAC; the distinction in naming between the two devices being subject to the particular manufacturer. In general, SIDACs have higher breakover voltages and current handling capacities than DIACs. The operation of the SIDAC is quite simple and is functionally identical to that of a spark gap or similar to two inverse parallel Zener diodes. The SIDAC remains nonconducting until the applied voltage meets or exceeds its rated breakover voltage. Once entering this conductive state, the SIDAC continues to conduct, regardless of voltage, until the applied current falls below its rated holding current. At this point, the SIDAC returns to its initial nonconductive state to begin the cycle once again. Somewhat uncommon in most electronics, the SIDAC is relegated to the status of a special purpose device. However, where part-counts are to be kept low, simple relaxation oscillators are needed, and the voltages are too low for practical operation of a spark gap, the SIDAC is an indispensable component.
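
As a rough behavioural sketch of that breakover/holding cycle, here is a toy Python state machine (the breakover voltage and holding current values are made up for illustration, not taken from any datasheet):

def sidac_step(voltage, current, conducting, v_bo=120.0, i_h=0.05):
    # v_bo: rated breakover voltage (V); i_h: rated holding current (A).
    if not conducting:
        return abs(voltage) >= v_bo   # fires only once breakover is reached
    return abs(current) >= i_h        # stays on until current drops below holding

# Sweep: off below 120 V, latches on at 130 V, stays on even as the voltage
# falls, and drops out only when the current falls below the holding level.
state = False
for v, i in [(50.0, 0.0), (130.0, 1.0), (90.0, 0.5), (90.0, 0.01)]:
    state = sidac_step(v, i, state)
    print(v, i, state)   # off, on, on, off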


Trisil

Trisil is an electronic component designed to protect electronic circuits against overvoltage. Unlike a Transil, it acts as a crowbar device, switching on when the voltage across it exceeds its breakover voltage. A Trisil is bipolar, behaving the same way in both directions. It is principally a voltage-controlled triac without a gate. In 1982, the only manufacturer was Thomson SA. This type of crowbar protector is widely used for protecting telecom equipment from lightning-induced transients and induced currents from power lines. Other manufacturers of this type of device include Bourns and Littelfuse. Rather than using the natural breakdown voltage of the device, an extra region is fabricated within the device to form a Zener diode, which allows much tighter control of the breakdown voltage. It is also possible to make gated versions of this type of protector. In this case, the gate is connected to the telecom circuit's power supply (via a diode or transistor) so that the device will crowbar if the transient exceeds the power supply voltage. The main advantage of this configuration is that the protection voltage tracks the power supply, eliminating the problem of selecting a particular breakdown voltage for the protection circuit.


Wibro

Designed and integrated by the Korean telecom industry as an answer to the speed limitations of CDMA 1x mobile phones and to improve on the data rates of broadband Internet access such as ADSL or wireless LAN, the technology uses TDD for duplexing, OFDMA for multiple access and 8.75 MHz as its channel bandwidth.
WiBro base stations provide a data rate of 30 to 50 Mbit/s and allow the use of portable internet within a range of 1-5 km; the service remains usable for devices moving at up to about 120 km/h, in contrast to wireless LAN, which is usable only at low speeds, and mobile phone networks, which remain usable up to about 250 km/h. The range and bandwidth demonstrated when the technology was tested in connection with the APEC summit in Busan in 2005 were actually higher than these figures. The main advantage the technology has over the WiMAX standard is its quality of service (QoS), which gives more reliability for streaming video content and for other loss-sensitive data. WiBro is quite demanding in its requirements, from spectrum use to equipment design; WiMAX leaves much of this up to the equipment provider while supplying enough information to confirm interoperability between designs.
In Korea, the government recognized the advent of this innovative technology in 2001 by allocating 100 MHz of electromagnetic spectrum in the 2.3-2.4 GHz band. By the end of 2004, WiBro Phase 1 had been standardized by the TTA of Korea, and in late 2005 the ITU reflected WiBro as IEEE 802.16e. In June 2006, two major Korean telecom companies, KT and SKT, began commercial operations in the country, starting with a charge of US$30.
Since then, many telecom operators around the world, namely TI (Italy), TVA (Brazil), Omnivision (Venezuela), PORTUS (Croatia), and Arialink (Michigan), have announced plans to launch commercial operations of the technology.


Plasma Television

Television has been around since the 19th century, and for the past 50 years it has held a pretty common place in our living rooms. Since the invention of television, engineers have been striving to produce slim, flat displays that deliver images as good as or even better than the bulky CRT, and scores of research teams all over the world have been working to achieve this. Plasma television has achieved this goal. The technologies inside it, plasma and high definition, are just two of the latest to hit stores. The main contenders in the flat race are the PDP (Plasma Display Panel) and the flat CRT, along with the LCD and the FED (Field Emission Display). To get an idea of what makes a plasma display different, it helps to understand how a conventional TV set works. Conventional TVs use a CRT to create the images we see on the screen. The cathode is a heated filament, like the one in a light bulb, housed inside a vacuum created in a tube of thick glass; that is what makes your TV so big and heavy. The newest entrant in the field of flat-panel display systems is the plasma display. Plasma display panels don't contain cathode ray tubes, and their pixels are activated differently.


OFDMA

Orthogonal Frequency Division Multiple Access (OFDMA) is a multiple access scheme for OFDM systems. It works by assigning a subset of subcarriers to individual users.

OFDMA features:

· OFDMA is the 'multi-user' version of OFDM.
· It functions by partitioning the resources in the time-frequency space, assigning units along the OFDM symbol index and the OFDM sub-carrier index.
· Each OFDMA user transmits symbols using sub-carriers that remain orthogonal to those of other users.
· More than one sub-carrier can be assigned to one user to support high-rate applications.
· It allows simultaneous transmission from several users, giving better spectral efficiency.
· Multiuser interference is introduced if there is a frequency synchronization error.

The term 'OFDMA' is claimed as a registered trademark by Runcom Technologies Ltd., with various other claimants to the underlying technologies through patents. OFDMA is used in the mobility mode of the IEEE 802.16 WirelessMAN Air Interface standard, commonly referred to as WiMAX.
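
As a small illustration of the subcarrier-assignment idea, here is a Python sketch with two users on disjoint subsets of an 8-subcarrier OFDM symbol (the sizes and symbol values are made up, not any standard's numerology):

import numpy as np

N = 8                                          # subcarriers in one OFDM symbol
user_a, user_b = [0, 1, 2, 3], [4, 5, 6, 7]    # disjoint subcarrier subsets

# Each user places its data symbols only on its own subcarriers.
freq = np.zeros(N, dtype=complex)
freq[user_a] = [1+1j, 1-1j, -1+1j, -1-1j]
freq[user_b] = [1+1j, -1-1j, 1-1j, -1+1j]

tx = np.fft.ifft(freq)   # combined time-domain OFDM symbol on the air

# The receiver's FFT recovers each user's symbols without inter-user
# interference, because the subcarriers are orthogonal over the symbol.
rx = np.fft.fft(tx)
print(np.round(rx[user_a], 6))
print(np.round(rx[user_b], 6))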


Native Command Queuing (NCQ)

Native Command Queuing (NCQ) is a technology designed to increase the performance of SATA hard disks by allowing the individual hard disk to receive more than one I/O request at a time and decide which to complete first. Using detailed knowledge of its own seek times and rotational position, the drive can compute the best order in which to perform the operations. This can reduce the amount of unnecessary seeking (going back and forth) of the drive's heads, resulting in increased performance (and slightly decreased wear of the drive) for workloads where multiple simultaneous read/write requests are outstanding, most often occurring in server-type applications.
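
As a toy model of the reordering idea, here is a Python sketch of an elevator-style sweep ordered by logical block address (real drives also weigh rotational position, which this sketch ignores):

def ncq_order(queued_lbas, head_position):
    # Service queued requests in one sweep from the current head position,
    # instead of first-come-first-served, to cut total seek distance.
    ahead = sorted(lba for lba in queued_lbas if lba >= head_position)
    behind = sorted((lba for lba in queued_lbas if lba < head_position), reverse=True)
    return ahead + behind

queue = [9000, 120, 5400, 300, 7700]
print(ncq_order(queue, head_position=5000))  # [5400, 7700, 9000, 300, 120]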


MICRO-ELECTRO-MECHANICAL SYSTEMS

Micro-Electro-Mechanical Systems (MEMS) is the integration of mechanical elements, sensors, actuators, and electronics on a common silicon substrate through microfabrication technology. While the electronics are fabricated using integrated circuit (IC) process sequences (e.g., CMOS, Bipolar, or BICMOS processes), the micromechanical components are fabricated using compatible "micromachining" processes that selectively etch away parts of the silicon wafer or add new structural layers to form the mechanical and electromechanical devices.




MEMS promises to revolutionize nearly every product category by bringing together silicon-based microelectronics with micromachining technology, making possible the realization of complete systems-on-a-chip. MEMS is an enabling technology allowing the development of smart products, augmenting the computational ability of microelectronics with the perception and control capabilities of microsensors and microactuators and expanding the space of possible designs and applications.


Microelectronic integrated circuits can be thought of as the "brains" of a system, and MEMS augments this decision-making capability with "eyes" and "arms" to allow microsystems to sense and control the environment. Sensors gather information from the environment by measuring mechanical, thermal, biological, chemical, optical, and magnetic phenomena. The electronics then process the information derived from the sensors and, through some decision-making capability, direct the actuators to respond by moving, positioning, regulating, pumping, and filtering, thereby controlling the environment for some desired outcome or purpose. Because MEMS devices are manufactured using batch fabrication techniques similar to those used for integrated circuits, unprecedented levels of functionality, reliability, and sophistication can be placed on a small silicon chip at a relatively low cost.

MEMS and nano devices are extremely small: MEMS and nanotechnology have made possible electrically driven motors smaller than the diameter of a human hair. But MEMS and nanotechnology are not primarily about size. Nor are they about making things out of silicon, even though silicon possesses excellent material properties that make it an attractive choice for many high-performance mechanical applications; for example, the strength-to-weight ratio of silicon is higher than that of many other engineering materials, which allows very high-bandwidth mechanical devices to be realized.

Instead, the deep insight of MEMS and nanotechnology is that they constitute a new manufacturing technology: a way of making complex electromechanical systems using batch fabrication techniques similar to those used for integrated circuits, and of uniting these electromechanical elements with electronics.


First, MEMS and nanotechnology are extremely diverse technologies that could significantly affect every category of commercial and military products. They are already used for tasks ranging from in-dwelling blood pressure monitoring to active suspension systems for automobiles.


Second, MEMS and nanotechnology blur the distinction between complex mechanical systems and integrated circuit electronics. Historically, sensors and actuators have been the most costly and unreliable parts of a macroscale sensor-actuator-electronics system. MEMS and nanotechnology allow these complex electromechanical systems to be manufactured using batch fabrication techniques, decreasing the cost and increasing the reliability of the sensors and actuators to equal those of integrated circuits.


MAC address

In computer networking a Media Access Control address (MAC address) is a unique identifier attached to most forms of networking equipment. Most layer 2 network protocols use one of three numbering spaces managed by the IEEE: MAC-48, EUI-48, and EUI-64, which are designed to be globally unique. Not all communications protocols use MAC addresses, and not all protocols require globally unique identifiers. The IEEE claims trademarks on the names 'EUI-48' and 'EUI-64'. (The 'EUI' stands for Extended Unique Identifier.)
ARP/RARP is commonly used to map the layer 2 MAC address to an address in a layer 3 protocol such as Internet Protocol (IP). On broadcast networks such as Ethernet the MAC address allows each host to be uniquely identified and allows frames to be marked for specific hosts. It thus forms the basis of most of the layer 2 networking upon which higher OSI Layer protocols are built to produce complex, functioning networks.
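
As a small illustration of how MAC-48 addresses are structured, here is a Python sketch that extracts the OUI (the IEEE-assigned vendor prefix) and the two flag bits carried in the first octet (the address used is made up):

def parse_mac(mac):
    # Split a MAC-48 address into its OUI and the two flag bits.
    octets = [int(part, 16) for part in mac.split(":")]
    assert len(octets) == 6
    return {
        "oui": ":".join(f"{o:02x}" for o in octets[:3]),  # vendor prefix
        "multicast": bool(octets[0] & 0x01),              # least significant bit
        "locally_administered": bool(octets[0] & 0x02),   # second bit
    }

print(parse_mac("00:1a:2b:3c:4d:5e"))
# {'oui': '00:1a:2b', 'multicast': False, 'locally_administered': False}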


HyperTransport (HT)

HyperTransport (HT), formerly known as Lightning Data Transport (LDT), is a bidirectional serial/parallel, high-bandwidth, low-latency computer bus that was introduced on April 2, 2001.[1] The HyperTransport Technology Consortium is in charge of promoting and developing HyperTransport technology. The technology is used by AMD and Transmeta in x86 processors; PMC-Sierra, Broadcom, and Raza Microelectronics in MIPS microprocessors; ATI Technologies, NVIDIA, VIA, SiS, ULi/ALi, AMD, Apple Computer and HP in PC chipsets; HP, Sun Microsystems, IBM, and IWill in servers; Cray, Newisys, and PathScale in high performance computing; and Cisco Systems in routers. Notably missing from this list is semiconductor giant Intel, which continues to use a shared bus architecture.


Hydrophone

A hydrophone is a sound-to-electricity transducer for use in water or other liquids, analogous to a microphone for air. Note that a hydrophone can sometimes also serve as a projector (emitter), but not all hydrophones have this capability, and they may be destroyed if used in such a manner. The first device to be called a 'hydrophone' was developed when the technology matured and used ultrasonic waves, which provided higher overall acoustic output as well as improved detection. The ultrasonic waves were produced by a mosaic of thin quartz crystals glued between two steel plates, with a resonant frequency of about 150 kHz. Contemporary hydrophones more often use barium titanate, a piezoelectric ceramic material, which gives higher sensitivity than quartz. Hydrophones are an important part of the SONAR systems used to detect submarines by both surface vessels and other submarines. Large numbers of hydrophones were used in the building of various fixed-location detection networks such as SOSUS.


Portable Fuel Cell



Commercialisation: catalysing the industry

Electricity is becoming as much a problem in the 21st century as it was a solution in the 19th and 20th centuries. Global demand for electrical energy is increasing in areas as diverse as mobile phones and household lighting, driven by the basic power needs of emerging economies and ever-increasing consumer demand for electronics. We are using more power than ever, and our existing sources are starting to feel the strain. Fuel cells could be the answer to some of these power problems, providing effectively limitless run-times at a reasonable price. Recent technological advances have brought this 1830s power technology into the spotlight once more, with reduced costs, better performance and higher reliability.

As a result, the number of UK quoted fuel cell companies has increased rapidly in the past twelve months, with distinct groups emerging around domestic power, portable electronics, stationary and backup power. Investors can now gain exposure to companies delivering fuel cells for specific consumer markets, as well as firms providing key materials across a range of fuel cell markets.

Portable fuel cells are widely considered the technology closest to commercialisation, owing to a combination of consumer demand and technological readiness. This seminar is intended to draw the attention of investors to opportunities in this market, being direct investments in fuel cell firms and in the broader supply-chain.

We trust you will find this event useful and welcome your feedback.

Investment opportunities

With rising demand for micropower sources and fuel cell technology transitioning to commercially viable products, as opposed to scientific curiosities and technology "demonstrators", we expect significant revenue growth in this market between 2007 and 2010. As a result, the investor universe has expanded vastly beyond SRI, clean-tech and alternative energy funds to include generalist small-cap and hedge funds. Investors can gain exposure to this high-growth area by investing directly in the enabling technology companies or across the broader value chain: upstream in raw material and fuel suppliers, and downstream in manufacturers of portable electronics or wireless service providers.


There are several reasons to invest in the portable fuel cell industry:
• Portable electronics markets are large and growing rapidly and the number of features on these devices are increasing day by day
• The “run-time gap” is a significant problem
• Strong support for fuel cells in the area of portable electronics
• Portable fuel cells could catalyse the rest of the fuel cell industry

Portable device markets: The markets for portable electronics are large and growing rapidly. Collins Stewart estimates that the combined military, industrial and consumer portable device markets were close to one billion units in 2005. Mobile handsets are the largest market constituent, reaching 815 million units globally in 2005 (an increase of 21% over the previous year), and forecasts from Strategy Analytics suggest handset sales will exceed 1 billion units by 2007. High-end "power-eater" 3G and convergence handset segments are a growing proportion of the whole, with Nokia recently sizing the latter market at 100 million units in 2006 and over 250 million in 2008. In Japan, the 3G-phone market share increased from 10% to 36% in only three years; indeed, recent reports suggest that more than 80% of the 3.5 million phones shipped in Japan during the month of January were 3G handsets.

The laptop market offers similar opportunities for fuel cell companies, with approximately 65 million units sold worldwide in 2005, circa 30% growth over 2004. With the trend towards mobility gathering pace, this double-digit laptop sales growth is projected to continue for several years. Business users, considered the consumer group most willing to pay for extended battery life, own approximately 60% of the world's laptops.

Besides laptops and mobile handsets, there are numerous applications in the medical, military and industrial electronics markets, where price-sensitivity is less of an issue and the need for long run-times is compelling.

The "run-time gap": Consumers always want portable electronics to be compact and lightweight, but they also expect manufacturers to pack in plenty of new power-hungry features such as video-conferencing, wireless communications, camera flashes and MP3 players. This causes a considerable problem for device designers, who struggle to supply sufficient electrical power and energy without increasing the size of the batteries or the device.

Batteries have thus become the number one issue for manufacturers of portable electronics. Although adequate from a power perspective, contemporary battery technology has reached a ceiling in terms of the energy they can store in the available space. A gap is emerging between energy supply and energy demand.

As a result, manufacturers must either hold back on included features or dramatically compromise run-time, a core consumer requirement. The "run-time gap" has become a significant problem, and both consumers and manufacturers are willing to pay for a viable solution.


Portable fuel cells: The support for fuel cells is arguably strongest in the area of portable electronics, where manufacturers, consumers and after-market service providers would all benefit from an improved energy solution. Portable fuel cells are intended for use in consumer, industrial and military electronics, complementing batteries in a hybrid solution to give a longer-lasting energy supply. OEMs anecdotally project adoption rates at between 10% and 30% of portable device purchasers, somewhat biased towards military, business and industrial users at first.

Portable devices are best served by liquid fuels such as methanol, which is cheap, convenient to use and store, and produced in volume. In recent years, direct methanol fuel cell (DMFC) technology has dominated the development programmes of the major OEMs. Regulatory support is evident, with safety standards set for micro fuel cells and provisions in place to allow air passengers to carry methanol fuel cartridges from January 2007.

As with portable electronics in general, portable fuel cells will benefit from rapid innovation. Fast-moving consumer goods, such as laptops and mobile phones, have a relatively short shelf life, enjoy rapid development cycles and thus allow manufacturers to reach operating scale economies rapidly. The relatively low cost will encourage consumers to experiment with fuel cells and drive adoption.

Catalysing the industry: With portable devices likely to reach production scale before other fuel cell technologies, these fuel cells are likely to set the standards and drive availability of fuels for other applications, while consumer acceptance of fuel-cell technology will be bolstered by the rapid innovation and blue chip companies involved.

It is fair to say that most fuel cell companies are still in a product development and testing phase – even those in the portable fuel cell group – and significant challenges remain in bringing a new technology to the mass market. However, many OEMs are targeting the 2007/8 timeframe for commercial product launches and fuel cell companies are arranging their affairs accordingly. The next two years will see many newsworthy developments.


Free space laser communication

Lasers have been considered for space communication since their realization in 1960. It was soon recognized that the laser had the potential to transfer data at very high rates.
Features of laser communication
Extremely high bandwidth and large information throughput are available, many times greater than with RF communication. Modulation of a helium-neon laser (frequency 4.7 × 10^14 Hz) results in a channel bandwidth of 4700 GHz, which is enough to carry a million simultaneous TV channels.
Small antenna size requires only a small increase in the weight and volume of the satellite. This reduces blockage of the fields of view of the most desirable areas on satellites. Laser satellite communication equipment can provide advantages of 3:1 in mass and 2:1 in power relative to microwave systems.

Narrow beam divergence affords interference-free and secure operation; the existence of laser beams cannot be detected with spectrum analyzers. The antenna gain made possible by the narrow beam enables a small telescope aperture to be used.

The 1550 nm wavelength technology has the added advantage of being inherently eye-safe at the power levels used in free-space systems, alleviating the health and safety concerns often raised about using lasers in an open environment where human exposure is possible.

Laser technology can meet the needs of a variety of space missions, including intersatellite links, Earth-to-near-space links, and deep space missions. The vast distances to deep space make data return via conventional radio frequency techniques extremely difficult.
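
As a back-of-the-envelope check of those numbers, a short Python sketch (the 1% usable fraction of the optical carrier and the 6 MHz analog TV channel are assumptions of this sketch, not figures from the text):

carrier_hz = 4.7e14        # helium-neon laser frequency quoted above
usable_fraction = 0.01     # assume ~1% of the carrier is usable bandwidth
tv_channel_hz = 6.0e6      # assume one 6 MHz analog TV channel

bandwidth_hz = carrier_hz * usable_fraction
print(bandwidth_hz / 1e9, "GHz")          # 4700.0 GHz, matching the text
print(int(bandwidth_hz / tv_channel_hz))  # ~783,000 channels, on the order
                                          # of the million quoted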


FireWire

FireWire (also known as i.Link or IEEE 1394) is a personal computer (and digital audio/digital video) serial bus interface standard, offering high-speed communications and isochronous real-time data services. FireWire has replaced Parallel SCSI in many applications due to lower implementation costs and a simplified, more adaptable cabling system.
Almost all modern digital camcorders have included this connection since 1995. Many computers intended for home or professional audio/video use have built-in FireWire ports including all Macintosh, Dell and Sony computers currently produced. FireWire was also an attractive feature on the Apple iPod for several years, permitting new tracks to be uploaded in a few seconds and also for the battery to be recharged concurrently with one cable. However, Apple has eliminated FireWire support in favor of Universal Serial Bus (USB) 2.0 on its newer iPods due to space constraints and for wider compatibility.


DNA Based Computing

Biological and mathematical operations have some similarities, despite their respective complexities:

1. The very complex structure of a living being is the result of applying simple operations to initial information encoded in a DNA sequence;
2. The result f(w) of applying a computable function to an argument w can be obtained by applying a combination of basic simple functions to w.

For the same reasons that DNA was presumably selected for living organisms as a genetic material, its stability and predictability in reactions, DNA strings can also be used to encode information for mathematical systems.

To solve the Hamiltonian Path problem, the objective is to find a path from start to end going through all the points only once. This problem is difficult for conventional computers to solve because it is a 'non-deterministic polynomial time problem' (NP). NP problems are intractable with deterministic (conventional/serial) computers, but can be solved using non-deterministic (massively parallel) computers. A DNA computer is a type of non-deterministic computer. Dr. Leonard Adleman (1994) was struck with the idea of using sequences of stored nucleotides (Adenine (A), Guanine (G), Cytosine (C), Thymine (T)) in molecules of DNA to store computer instructions and data in place of the sequences of electrical, magnetic or optical on-off states (0, 1 – Boolean Logic) used in today’s computers. The Hamiltonian Path problem was chosen because it is known as 'NP-complete'; every NP problem can be reduced to a Hamiltonian Path problem.

The following algorithm solves the Hamiltonian Path problem:
1. Generate random paths through the graph.
2. Keep only those paths that begin with the start city (A) and conclude with the end city (G).
3. If the graph has n cities, keep only those paths with n cities. (n = 7)
4. Keep only those paths that enter all cities at least once.
5. Any remaining paths are solutions.
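
A brute-force, in-silico Python sketch of these five filtering steps on a small made-up 7-city graph may help (a real DNA computer performs the same filtering chemically, on vast numbers of strands in parallel):

import random

# A hypothetical directed graph; 'A' is the start city, 'G' the end city.
edges = {('A','B'), ('B','C'), ('C','D'), ('D','E'), ('E','F'), ('F','G'),
         ('A','C'), ('B','D'), ('C','E')}
cities = "ABCDEFG"

def random_path(length):
    # Step 1: a random walk along directed edges, like ligating random
    # city and edge strands in a test tube.
    path = [random.choice(cities)]
    while len(path) < length:
        options = [b for (a, b) in edges if a == path[-1]]
        if not options:
            break
        path.append(random.choice(options))
    return path

solutions = set()
for _ in range(20000):
    p = random_path(len(cities))
    if (p[0] == 'A' and p[-1] == 'G'        # step 2: correct start and end
            and len(p) == len(cities)       # step 3: exactly n cities
            and set(p) == set(cities)):     # step 4: every city visited
        solutions.add(tuple(p))             # step 5: survivors are solutions
print(solutions)  # {('A', 'B', 'C', 'D', 'E', 'F', 'G')}
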
The Unrestricted model of DNA computing is the key to carrying out the five steps of the above algorithm. The following operations can be used to 'program' a DNA computer.
o Synthesis of a desired strand
o Separation of strands by length
o Merging: pour two test tubes into one to perform union
o Extraction: extract those strands containing a given pattern
o Melting/Annealing: break/bond two ssDNA molecules with complementary sequences
o Amplification: use PCR to make copies of DNA strands
o Cutting: cut DNA with restriction enzymes
o Ligation: Ligate DNA strands with complementary sticky ends using ligase
o Detection: Confirm presence/absence of DNA in a given test tube
Since Adleman's original experiment, several methods to reduce error and improve efficiency have been developed. The Restricted model of DNA computing solves several physical problems with the Unrestricted model. The Restricted model simplifies the physical obstructions in exchange for some additional logical considerations. The purpose of this restructuring is to simplify biochemical operations and reduce the errors due to physical obstructions.

The Restricted model of DNA computing:
o Separate: isolate a subset of DNA from a sample
o Merging: pour two test tubes into one to perform union
o Detection: Confirm presence/absence of DNA in a given test tube
Despite these restrictions, this model can still solve NP-complete problems such as the 3-colourability problem, which decides if a map can be coloured with three colours in such a way that no two adjacent territories have the same colour. Error control is achieved mainly through logical operations, such as running all DNA samples showing positive results a second time to reduce false positives. Some molecular proposals, such as using DNA with a peptide backbone for stability, have also been recommended.
DNA computing raises the prospect of using DNA molecules in place of electronic circuits and magnetic or optical storage media. For one calculation at a time (serial logic), DNA computers are obviously not a viable option; for many calculations performed simultaneously (parallel logic), however, a computer such as the one described above could in principle perform 10¹⁴ million instructions per second (MIPS). DNA computers also require less energy and space. Data are entered and encoded into DNA by chemical reactions, and retrieved by synthesizing a key strand and letting it react with the existing strands; the key DNA sticks to the strands containing the required data.

In short, in a DNA computer, the input and output are both strands of DNA. Furthermore, a computer in which the strands are attached to the surface of a chip (DNA chip) can now solve difficult problems quite quickly.


Direct sequence code division multiple access (DS-CDMA)

Ordinary CDMA technology has the drawback that it must identify users by means of each user's signature sequence. Since many users share the transmission medium, there is always a chance that their signals superimpose and cause interference in the network. The solution to this inconvenience comes in the form of interleaving, which separates the users. This special form of CDMA offers the familiar CDMA features, such as dynamic channel sharing, mitigation of cross-cell interference, asynchronous transmission, ease of cell planning and robustness against fading, together with the low-cost interference-cancellation techniques available for systems with a large number of users in multipath channels. Available with second- and third-generation mobile phones, the per-user cost of this algorithm is independent of the number of users, and it maintains low complexity and high performance even in multipath situations.
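The direct-sequence principle itself fits in a few lines: each user's bits are spread by a private ±1 chip signature, and the receiver despreads by correlating the received sum with that signature. The sketch below is an illustration only; the signatures and bit patterns are made up (and chosen orthogonal), and noise, fading and the interleaving described above are ignored.

```python
# Direct-sequence spreading and despreading for two users (illustrative).
import numpy as np

CHIPS = 8
signatures = {
    "user1": np.array([+1, -1, +1, -1, +1, -1, +1, -1]),
    "user2": np.array([+1, +1, -1, -1, +1, +1, -1, -1]),
}
bits = {"user1": np.array([+1, -1, +1]), "user2": np.array([-1, -1, +1])}

# Transmit: spread each bit over CHIPS chips, then superimpose the users.
tx = sum(np.repeat(bits[u], CHIPS) * np.tile(signatures[u], len(bits[u]))
         for u in signatures)

def despread(rx, signature):
    """Correlate each chip block with a signature to recover that user's bits."""
    return np.sign(rx.reshape(-1, CHIPS) @ signature)

for u in signatures:
    print(u, despread(tx, signatures[u]))   # each user's bits come back cleanly
```

Because the two example signatures are orthogonal, each correlation cancels the other user's contribution exactly; with non-orthogonal signatures the residue is the multiple-access interference that cancellation techniques must remove.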


Differential signaling

Differential signaling is a method of transmitting information over pairs of wires (as opposed to single-ended signaling, which transmits information over single wires).
Differential signaling reduces the noise on a connection by rejecting common-mode interference. Two wires (referred to here as A and B) are routed in parallel, and sometimes twisted together, so that they will receive the same interference. One wire carries the signal, and the other wire carries the inverse of the signal, so that the sum of the voltages on the two wires is always constant.
At the end of the connection, instead of reading a single signal, the receiving device reads the difference between the two signals. Since the receiver ignores the wires' voltages with respect to ground, small changes in ground potential between transmitter and receiver do not affect the receiver's ability to detect the signal. The system is also immune to most types of electrical interference, since any disturbance that lowers the voltage on A will also lower it on B, leaving the difference unchanged.
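A toy numerical check of this cancellation (the voltage levels and noise amplitude are arbitrary assumptions):

```python
# Common-mode interference cancels in the wire difference (illustrative).
import numpy as np

signal = np.array([1, 0, 1, 1, 0], dtype=float)      # logical bits
wire_a = np.where(signal > 0, +1.0, -1.0)            # the signal
wire_b = -wire_a                                     # its inverse
noise = np.random.normal(0.0, 0.4, size=signal.size) # common-mode pickup
wire_a += noise                                      # both wires receive
wire_b += noise                                      # the same disturbance

received = (wire_a - wire_b) / 2                     # receiver takes difference
print(np.where(received > 0, 1, 0))                  # bits recovered exactly
```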


Direct to Home Television (DTH)

Direct to home (DTH) television is a wireless system for delivering television programs directly to the viewer’s house. In DTH television, the broadcast signals are transmitted from satellites orbiting the Earth to the viewer’s house. Each satellite is located approximately 35,700 km above the Earth in geosynchronous orbit. These satellites receive the signals from the broadcast stations located on Earth and rebroadcast them to the Earth.
The viewer’s dish picks up the signal from the satellite and passes it on to the receiver located inside the viewer’s house. The receiver processes the signal and passes it on to the television.
DTH provides more than 200 television channels with excellent reception quality, along with teleshopping, fax and internet facilities. DTH television is used in millions of homes across the United States, Europe and South East Asia.


ARTIFICIAL EYE

The retina is a thin layer of neural tissue that lines the back wall inside the eye. Some of these cells act to receive light, while others interpret the information and send messages to the brain through the optic nerve. This is part of the process that enables us to see. In a damaged or dysfunctional retina, the photoreceptors stop working, causing blindness. By some estimates, more than 10 million people worldwide are affected by retinal diseases that lead to loss of vision.

The absence of effective therapeutic remedies for retinitis pigmentosa (RP) and age-related macular degeneration (AMD) has motivated the development of experimental strategies to restore some degree of visual function to affected patients. Because the remaining retinal layers are anatomically spared, several approaches have been designed to artificially activate this residual retina and thereby the visual system.

At present, two general strategies have been pursued. The "epiretinal" approach involves a semiconductor-based device placed above the retina, close to or in contact with the nerve fiber layer and the retinal ganglion cells. In this approach the information must be captured by a camera system before data and energy are transmitted to the implant. The "subretinal" approach involves electrical stimulation of the inner retina from the subretinal space by implantation of a semiconductor-based micro photodiode array (MPA) into this location. The concept of the subretinal approach is that the electrical charge generated by the MPA in response to a light stimulus may be used to artificially alter the membrane potential of neurons in the remaining retinal layers in a manner that produces formed images.

Some researchers have developed an implant system in which a video camera captures images, a chip processes them, and an electrode array transmits them directly to the brain; this approach is known as the cortical implant.


The human visual system is a remarkable instrument. It features two mobile acquisition units, each with formidable preprocessing circuitry, placed at a remote location from the central processing system (the brain). Its primary task includes transmitting images with a viewing angle of at least 140° and a resolution of 1 arc minute over a limited-capacity carrier, the million or so fibers in each optic nerve; through these fibers the signals pass to the so-called higher visual cortex of the brain.

The nervous system can achieve this high-volume data transfer by confining such capability to just part of the retinal surface: whereas the center of the retina has a 1:1 ratio between photoreceptors and transmitting elements, the far periphery has a ratio of 300:1. This results in a gradual shift in resolution and other system parameters.
At the brain's highest level, the visual cortex, an impressive array of feature-extraction mechanisms can rapidly adjust the eye's position toward sudden movements in the peripheral field, even of objects too small to see when stationary. The visual system can resolve spatial depth differences by combining signals from both eyes, with a precision smaller than one tenth the size of a single photoreceptor.


Fig 2.1: Block diagram of the visual system

2.1 The Eye

The main part in our visual system is the eye. Our ability to see is the result of a process very similar to that of a camera. A camera needs a lens and a film to produce an image. In the same way, the eyeball needs a lens (cornea, crystalline lens, vitreous) to refract, or focus the light and a film (retina) on which to focus the rays. The retina represents the film in our camera. It captures the image and sends it to the brain to be developed.

The macula is the highly sensitive area of the retina. The macula is responsible for our critical focusing vision. It is the part of the retina most used. We use our macula to read or to stare intently at an object. About 130 million photoreceptors in the outermost layer (as seen from the center of the eye) of the transparent retina transform local intensity and color patterns into chemical and electrical signals which trigger activity of the many different retinal cells: horizontal cells, bipolar cells, amacrine cells, and ganglion cells.

The information is processed through an astonishing number of serial and parallel pathways, by mechanisms that are still partly unknown. The output of these 130 million photoreceptors is compressed to the level of 1 million highly specialized ganglion-cell (GC) fibers. These 1 million fibers then form the optic nerve and transmit visual information to the visual cortex and its various areas in the back of the brain.
The area of the retina that receives and processes the detailed images, and then sends them via the optic nerve to the brain, is the macula. The macula is significant in that it provides the highest resolution for the images we see; it comprises multiple layers of cells which process the initial "analog" light energy entering the eye into "digital" electrochemical impulses. The retina is the innermost layer of the wall of the eyeball. Millions of light-sensitive cells there absorb light rays and convert them to electrical signals, which are sent through the optic nerve to the brain, where they are interpreted as vision.


2.2 Retina

Light first enters the optic (or nerve) fiber layer and the ganglion cell layer, under which most of the nourishing blood vessels of the retina are located. This is where the nerves begin, picking up the impulses from the retina and transmitting them to the brain.


The light is received by photoreceptor cells called rods (responsible for peripheral and dim-light vision) and cones (providing central, bright-light, fine-detail, and color vision). The photoreceptors convert light into nerve impulses, which are then processed by the retina and sent through nerve fibers to the brain. The nerve fibers exit the eyeball at the optic disk and reach the brain through the optic nerve. Directly beneath the photoreceptor cells is a single layer of retinal pigment epithelium (RPE) cells, which nourish the photoreceptors. These cells are fed by the blood vessels in the choroid.


2.3 Retinal Diseases

There are two important types of retinal degenerative disease:

· Retinitis pigmentosa (RP), and
· Age-related macular degeneration (AMD)
They are detailed below.

RETINITIS PIGMENTOSA (RP) is a general term for a number of diseases that predominately affect the photoreceptor layer, the "light sensing" cells of the retina. These diseases are usually hereditary and affect individuals early in life. Injury to the photoreceptor cell layer, in particular, reduces the retina's ability to sense an initial light signal. Despite this damage, however, the retinal processing cells in the other layers usually continue to function. RP affects the mid-peripheral vision first and sometimes progresses to affect the far periphery and the central areas of vision. The narrowing of the field of vision into "tunnel vision" can sometimes end in complete blindness.

AGE-RELATED MACULAR DEGENERATION (AMD) refers to a degenerative condition that occurs most frequently in the elderly. AMD progressively decreases the function of specific cellular layers of the retina's macula. The affected areas within the macula are the photoreceptor layer of the outer retina and the inner retina.
Patients with macular degeneration experience a loss of their central vision, which affects their ability to read and perform visually demanding tasks. Although macular degeneration is associated with aging, the exact cause is still unknown.
Together, AMD and RP affect at least 30 million people in the world. They are the most common causes of untreatable blindness in developed countries and, currently, there is no effective means of restoring vision.
Chapter 3 Ocular Implants
Ocular implants are devices placed inside the eye at the retina. They aim at the electrical excitation of two-dimensional layers of neurons within partly degenerated retinas, in order to restore vision in blind people. Implantation can be done using standard techniques from ophthalmic surgery. Neural signals farther down the visual pathway are processed and modified in ways not fully understood; therefore, the earlier in the pathway the electronic input is fed into the nerves, the better.
There are two types of ocular implants: epi-retinal implants and subretinal implants.

Fig 3.1 shows the major difference between the epi-retinal and subretinal approaches.

3.1 Epi-Retinal Implants

In the EPI-RET approach, scientists developed a micro-contact array that is mounted onto the retinal surface to stimulate retinal ganglion cells. In this approach the information must be captured by a camera system before data and energy are transmitted to the implant.


A tiny video camera mounted on eyeglasses sends images via radio waves to the chip: the visual world is captured by a highly miniaturized CMOS camera embedded in regular spectacles. The camera signal is analyzed and processed with receptive-field algorithms to calculate the electric pulse trains necessary to adequately stimulate ganglion cells in the retina.
This signal, together with the energy supply, is transmitted wirelessly to a device implanted in the eye of the blind subject. The implant consists of a receiver for data and energy, a decoder, and an array of microelectrodes placed on the inner surface of the retina. This microchip stimulates viable retinal cells; each electrode evokes a pixel-like point of light, which the brain can then process.

The main advantage of this approach is its simplicity: a spectacle frame carrying the camera and external electronics communicates wirelessly with a microchip implanted on the retina, programmed with the stimulation pattern.

The issues involved in the design of the retinal encoder are:
· Chip Development
· Biocompatibility
· RF telemetry and Power systems

3.1.1 Chip Development EPI RETINAL ENCODER

The design of an epiretinal encoder is more complicated than that of the subretinal encoder, because it has to feed the ganglion cells directly. Here, a retina encoder (RE) outside the eye replaces the information processing of the retina. A retina stimulator (RS), implanted adjacent to the retinal ganglion cell layer at the retinal 'output', contacts a sufficient number of retinal ganglion cells/fibers for electrical stimulation. A wireless (radio frequency) signal and energy transmission system provides the communication between RE and RS. The RE maps visual patterns onto impulse sequences for a number of contacted ganglion cells by means of adaptive dynamic spatial filters. This is done by a digital signal processor, which handles the incoming light stimuli with the master processor, implements various adaptive, antagonistic, receptive-field filters with the other four parallel processors, and generates asynchronous pulse trains for each stimulated ganglion cell output individually. These spatial filters, as biology-inspired neural networks, can be 'tuned' to various spatial and temporal receptive-field properties of ganglion cells in the primate retina.
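The actual receptive-field filters in such an encoder are adaptive and tuned per patient, but their flavour can be sketched with a fixed centre-surround difference-of-Gaussians whose output is mapped to a pulse rate per contacted ganglion cell. Everything below (sigmas, rate scaling, image size) is an illustrative assumption, not the encoder's real parameterization.

```python
# Static ON-centre receptive-field filter as a stand-in for the RE (illustrative).
import numpy as np
from scipy.ndimage import gaussian_filter

def receptive_field_rates(image, sigma_center=1.0, sigma_surround=3.0,
                          max_rate=200.0):
    """Map an image to pulse rates via a difference-of-Gaussians filter."""
    center = gaussian_filter(image.astype(float), sigma_center)
    surround = gaussian_filter(image.astype(float), sigma_surround)
    dog = np.clip(center - surround, 0.0, None)   # keep the excitatory part
    if dog.max() > 0:
        dog /= dog.max()
    return dog * max_rate                         # pulses/s per contacted cell

frame = np.zeros((32, 32))
frame[12:20, 12:20] = 1.0                         # toy bright square
rates = receptive_field_rates(frame)
print(rates.max(), rates.mean())                  # peak and mean pulse rates
```

The antagonistic centre-surround structure means uniform regions of the image produce little output while edges produce strong output, which is the kind of preprocessing the retina itself performs.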

3.1.2 Biocompatibility

The materials used for the chips and stimulating electrodes must satisfy a variety of criteria:
· They must be corrosion-proof, i.e. biostable.
· The electrodes must establish good contact with the nerve cells within fluids, so that the stimulating electric current can pass from the photo elements into the tissue.
· It must be possible to manufacture these materials with microtechnical methods.
· They must be biologically compatible with the nervous system.

3.1.3 RF Telemetry

In the case of the epiretinal encoder, a wireless RF telemetry system acts as a channel between the retina encoder and the retina stimulator. Standard semiconductor technology is used to fabricate a power- and signal-receiving chip, which drives current through an electrode array and stimulates the retinal neurons. The intraocular transceiver processing unit is separated from the stimulator in order to take into account the heat dissipation of the rectification and power transfer processes. Care is taken to avoid direct contact of heat-dissipating devices with the retina.

A related external aid is the Low Vision Enhancement System (LVES) developed at Johns Hopkins, a video magnification system for people with low vision. It can zoom from 2 inches to infinity, magnifying 9x at distance and 25x near.
Visually impaired people must have a system customized to their own visual deficiencies, but such customization will be available only after 2010. Low vision, as described here, means vision no better than 20/40 when corrected.

3.2 SUB RETINAL IMPLANTATION

The subretinal approach is based on the fact that, in retinitis pigmentosa for instance, the neuronal network in the inner retina is preserved with a relatively intact morphology. It is therefore amenable to excitation by extrinsically applied electrical current, instead of the photoelectric excitation normally delivered intrinsically via the photoreceptors. This option requires that basic features of visual scenes, such as points, bars and edges, can be fed into the retinal network by electrical stimulation of individual sites of the distal retina with a set of individual electrodes.


The subretinal approach aims at a direct physical replacement of the degenerated photoreceptors in the human eye, whose basic function is very similar to that of solar cells, namely delivering slow potential changes upon illumination. The quantum efficiency of photoreceptor action, however, is about 1000 times larger than that of the corresponding technical devices. The intriguingly simple approach of replacing degenerated photoreceptors with artificial solar-cell arrays therefore has to overcome some difficulties, especially the energy supply for successful retina stimulation.

On the 'back' side of the retina, photoreceptors (rods and cones) are excited by the incoming light and deliver gradual potential changes to the inner retinal layers. The path of the electrical signals is thus opposite to that of the incoming light. The main problem in diseases like retinitis pigmentosa or macular degeneration is the loss of photoreceptors or photoreceptor function, whereas the signal-processing path in the inner retina remains intact. This gives us the chance to place a micro photo diode array (MPDA) in the subretinal space, which may then electrically stimulate the remaining photoreceptor or bipolar cells. Appropriate surgical techniques have recently been developed and tested.

It is believed that the evoked retinal activity leads to useful sensations if the retinal output preserves the topography of the image features and is projected retinotopically correctly to the visual cortex.

In addition, the sampling density of a sub retinal device could be designed to match that of the remaining photoreceptor or bipolar cell matrix, thereby providing a potentially high-resolution input to the retina.

Implant chips have been tested both in vitro and in vivo to assess their biostability. In vitro stability (in buffered saline solution) is excellent even for periods as long as two years. In vivo, however, the passivation layer withstood the biological environment for only about six months. In contrast, electrodes made of titanium nitride showed excellent biostability over more than 18 months in vivo. These are the results of in vitro and in vivo tests conducted by scientists at the Retinal Implant research centre.

3.2.1 In Vitro Tests

In order to evaluate parameters for subretinal electrical stimulation, scientists established new in vitro methods for electrical multisite stimulation of explanted retinas and multichannel recording of retinal activity. The aim of the study, which is still being carried out at the NMI, is to find stimulation paradigms that evoke spatially structured ganglion-cell activity within a safe operational range of the electrodes and the tissue, and with an adequate dynamic range of the retinal output.


Fig 3.2.2 Functional electrical retina stimulation in vitro.

(A) Monofocal distal current injection: pieces of whole-mount retina are attached to a microelectrode array (MEA) with the ganglion cell side facing the transparent glass plate and its embedded planar electrodes (asterisks). A tungsten electrode is lowered into the distal side of the retina, and monopolar charge-balanced current pulses are applied (bundle of arrows from top). (B) Multisite charge injection: with the ganglion cell side up, multifocal stimulation of the distal retina side is obtained by applying voltage pulses to a variable number of MEA electrodes (bundle of arrows from bottom); the retinal response is recorded from ganglion cell bodies with a glass pipette. (C) Sandwich preparation technique: an MPDA prototype chip is placed onto the distal retina side and illuminated with flashes of light (arrow from bottom); multi-unit ganglion cell activity evoked by the light-generated photodiode current (bundle of arrows from top) is recorded with several MEA electrodes in parallel.

Retina segments from chicken or blind RCS rats were adhered to a microelectrode array (MEA) with 60 substrate-integrated planar electrodes (diameter 10 µm, spacing 100 µm), either for distal stimulation or for proximal recording. In the preparation where the photoreceptor side faces the MEA, retinal activity was evoked by stimulation with different geometrically defined voltage patterns. With this method, the researchers were able to investigate how the retinal network response depends on the strength, shape and location of distally injected spatial charge patterns. This arrangement closely imitates the in vivo situation of a subretinal implant with embedded stimulation electrodes.
They found that applying different spatio-temporal voltage patterns via the electrode array resulted in well-ordered spatio-temporal activity patterns in the retinal network. The median charge delivery at threshold was 0.4 nC per pulse per electrode (charge density 500 µC/cm²). The operational range for modulating spike activity with distally injected charge covers about one to two orders of magnitude (charge in nC). The spatial resolution was 100-200 µm. The results also indicate that ganglion cells respond to charge injection within a circumscribed area with centre and surround.
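These two figures are mutually consistent, as a quick back-of-envelope check shows: 0.4 nC delivered through a 10 µm diameter planar electrode works out to roughly the quoted 500 µC/cm².

```python
# Back-of-envelope check: charge per pulse -> charge density (quoted values).
import math

charge_uC = 0.4e-3                       # 0.4 nC expressed in microcoulombs
radius_cm = (10e-4) / 2                  # 10 um electrode diameter, in cm
area_cm2 = math.pi * radius_cm ** 2      # planar electrode area
print(f"{charge_uC / area_cm2:.0f} uC/cm^2")   # prints 509, i.e. ~500 uC/cm2
```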
Threshold and operational range for subretinal stimulation:


Evoked retinal response related to the amount of injected charge. (A) Raster plot (40 trials) and cumulative response histogram (bin width 1 ms) to a single voltage pulse with 0.5 ms duration and increasing amplitude, applied via a platinized gold electrode to a chicken retina sample. In the histograms the number of spikes from 40 trials is given. (B) Relative ganglion cell response in a 40 ms window after pulse onset plotted against charge injected per pulse and electrode. At the upper axis the related voltage level and peak current are given. The error bars indicate the standard deviation of the number of spikes per trial within the analyzing window. The colored triangle indicates the operational range between the 10% and 90% response level. (C) Scatter diagram showing the charge thresholds for spot stimulation (n = 10). The line represents the median value (0.43 nC).
The experiments revealed that, in a partly degenerated neuronal network, information-processing capabilities are present and can be activated by artificial inputs. This opens up promising perspectives not only for the development of subretinally implanted stimulation devices as visual prostheses, but also for the entire field of neurobionics and neurotechnology.

3.2.2 In Vivo Tests


Electrical signals from the brain - VEP

A special part of the brain, the visual cortex, is believed to be the entrance structure to visual perception and cognition. Activity of nerve cells within the brain's surface (the cortex) produces electrical fields that can be picked up at some distance with electrodes (much as ceiling microphones pick up sound from the instruments of an orchestra during a concert). In humans, these electrodes are simply "glued" onto the scalp with a sticky paste on the back of the head. In the pig model, special arrays of electrodes fixed on a silicone carrier (Fig.1 B) are placed by neurosurgeons under the skull bone above the dura (Fig.1 A) and can be left there for several months.

When a visual stimulus (e.g. a blinking spot or a reversing checkerboard pattern) is presented within the visual field the electrical fields arising from the visual cortex change over time in a characteristic manner. These changes are measurable as voltage changes across the electrodes. They are referred to as "visually evoked potentials" or VEP.

VEP as an objective measure for visual function
VEPs are very informative about the visual system and its function. Each time a VEP can be recorded, a visual sensation has most probably occurred. In humans, VEP curves vary in amplitude and time course depending on the intensity, location and type of visual stimulus used to evoke them; VEPs are thus an objective measure of central visual function. Since its anatomy and size are very close to those of the human eye, the pig is an ideal model for developing implantation techniques for subretinal devices and for testing long-term stability and biocompatibility.

White light flashes of varying intensity were repeatedly presented to the anaesthetized pig. Electrical fields arising from the visual cortex in response to the stimulation were recorded with special amplifiers and further analyzed by computer. (A) At high light intensities, response amplitudes of up to 200 µV could be recorded. (B) Electrical stimulation with a subretinal device likewise evoked brain activity in the visual cortex.

In recent years, scientists have shown that stimulation via a subretinal implant indeed leads to activation of the visual cortex in both pigs and rabbits. These electrically evoked "VEP" signals from the brain were similar in time course and amplitude to the VEPs obtained by light stimulation (white flashes) of an equivalent retinal area.


Stimulus amplitudes of 600 mV (trace at 600 mV) evoked brain activity clearly above noise level (trace at 0 mV). The response amplitude increased further with increasing stimulation amplitude. For comparison, the lowermost trace shows the brain's response to a white light flash. Although the shapes of the responses to electrical and light stimulation match closely, the subretinal implant shows a much shorter implicit time (the time from stimulus onset, green line, to the onset of a recordable response in the brain). This is probably due to the much faster propagation of the "light signal" through the subretinal prosthesis and the connected retinal cells.


The Chows, founders of Optobionics, originally tested their chip in blind animals and successfully produced visual sensations. Their device displays only black-and-white images and works best in well-lit rooms, but they hope that the addition of more solar cells to the chip will eventually improve the results. Much of this technology hinges on the ability of the human eye to accept silicon chip implants, and six retinitis pigmentosa patients have undergone the procedure during the past year. Dr. Chow reports that, as yet, there has been no sign of rejection, infection, inflammation, or detachment, and that the patients (all affected by retinitis pigmentosa) are reporting improved vision.

A recent press release from Optobionics (May 2002) reported these positive results, and also that the chips seem to be stimulating remaining healthy cells. Initial expectations were to gain some light perception at the site of the implant, but improvement outside the implant areas is also being seen: something Dr. Chow calls a "rescue effect." His report was also presented at the 2002 meeting of the Association for Research in Vision and Ophthalmology (ARVO) in Ft. Lauderdale, Florida.
In addition to continuing to follow up on these six patients, the Optobionics company is planning more implants in the near future. This work by the Chows is for the purpose of determining the safety of the procedure in humans under FDA guidelines, and it will be several years before large-scale clinical trials will prove the efficacy of their approach.

A microchip designed for subretinal implantation must be small enough to be implanted in the eye, supplied with a continuous source of power, and biocompatible with the eye tissues. To meet these requirements, scientists at the Optobionics research centre developed a device called the artificial silicon retina (ASR).


3.2.3 STRUCTURE AND WORKING OF ASR

The ASR™ microchip is a silicon chip 2 mm in diameter and 25 microns thick, less than the thickness of a human hair. It contains approximately 5,000 microscopic solar cells called "microphotodiodes," each with its own stimulating electrode. These microphotodiodes are designed to convert the light energy of images into electrochemical impulses that stimulate the remaining functional cells of the retina in patients with retinal diseases such as RP and AMD.


The ASR microchip is powered solely by incident light and does not require the use of external wires or batteries. When surgically implanted under the retina—in a location known as the “subretinal space”—the ASR chip is designed to produce visual signals similar to those produced by the photoreceptor layer. From their sub retinal location, these artificial “photoelectric” signals from the ASR microchip are in a position to induce biological visual signals in the remaining functional retinal cells which may be processed and sent via the optic nerve to the brain.

In preclinical laboratory testing, animal models implanted with the ASR responded to light stimuli with retinal electrical signals (ERGs) and sometimes brain-wave signals (VEPs). The induction of these biological signals indicated that the ASR chip was able to stimulate the remaining retinal network.

When a diode is reverse biased, the electrons and holes move away from the PN junction. If the photodiode is exposed to a series of light pulses, the photon-generated minority carriers must diffuse to the junction and be swept across to the other side in a very short time. The width of the depletion region should therefore be large enough that most of the photons are absorbed within it rather than in the neutral regions. A photodiode can work in two modes: one in which the external circuit delivers power to the device, and one in which the device delivers power to the external circuit; in the latter mode it acts as a solar cell.

Thus, each photodiode of the ASR produces a voltage corresponding to the light energy incident on it. The solar cells in the device's microchip are intended to replace the function of the retina's light-sensing cells that have been damaged by disease.

The ASR microchip relies on the ability to stimulate the remaining functional cells within a partially degenerated inner or neuro retina. As a result, the ASR chip will not be able to assist patients with conditions where the retina or visual pathway is more substantially damaged.

3.2.4 IMPLANT DESIGN AND FABRICATION

The current micro photodiode array (MPA) comprises a regular array of individual photodiode subunits, each approximately 20 × 20 µm square and separated by 10 µm channel stops. Across the different generations examined, the implants have decreased in thickness, from ~250 µm for the earlier devices to approximately 50 µm for the devices currently in use. Because the implants are designed to be powered solely by incident light, there are no connections to an external power supply or any other device. In their final form, the devices generate current in response to wavelengths of 500 to 1100 nm.
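A back-of-envelope geometry check, under the assumption of the 30 µm unit pitch implied by 20 µm subunits plus 10 µm channel stops: a 2 mm disc then carries a few thousand subunits, the same order of magnitude as the ~5,000 microphotodiodes quoted earlier for the ASR (the exact count depends on each generation's pitch).

```python
# Estimate photodiode subunit count from pitch and disc diameter (illustrative).
import math

pitch_um = 20 + 10                        # subunit width + channel stop
disc_diameter_um = 2000                   # 2 mm implant
disc_area_um2 = math.pi * (disc_diameter_um / 2) ** 2
subunits = disc_area_um2 / pitch_um ** 2  # unit cells that fit in the disc
print(f"~{subunits:,.0f} subunits")       # prints ~3,491
```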


Implants are made from a doped and ion-implanted silicon substrate disk that forms a PiN (positive-intrinsic-negative) junction. Fabrication begins with a 7.6 cm diameter semiconductor-grade N-type silicon wafer. For the MPA device, a photomask is used to ion-implant shallow P+ doped wells into the front surface of the wafer, separated by channel stops, in a pattern of individual micro photodiodes. An intrinsic layer forms automatically at the boundary between the P+ wells and the N-type substrate of the wafer. The back of the wafer is then ion-implanted to produce an N+ surface.

Thereafter, an insulating layer of silicon nitride is deposited on the front of the wafer, covering the entire surface except for the well openings. A thin adhesion layer of chromium or titanium is then deposited over the P+ and N+ layers. A transparent electrode layer of gold, iridium/iridium oxide, or platinum is deposited on the front (well) side and on the back (ground) side. In its simplest form, the photodiode and electrode layers are the same size; however, the current density available at each individual micro photodiode subunit can be increased by increasing the ratio of photodiode collector area to electrode area.

Implant finishing involves several steps. Smaller square devices are produced by diamond sawing, affixed to a spindle using optical pitch, ground, and then polished to produce the final round devices for implantation. The diameter of these devices has ranged from 2-3 mm (for implantation into the rabbit or cat sub retinal space) to ~0.8 mm (for implantation into the smaller eye of the rat).


Fig 3.2.5: Schematic cross-section through a micro-photodiode (left) and a micrograph of the surface obtained by scanning electron microscopy. The micro-photodiodes convert light impinging on the surface of the chip into electric current, which is delivered to the tissue via micro-electrodes.


Fig 3.2.6: Titanium is sputtered at high pressure in a nitrogen atmosphere to obtain nano-porous titanium nitride (TiN) stimulation electrodes on the implant. This enables enhancement of electrode surface area by a factor of up to 100 which is a critical prerequisite for efficient charge transfer from chip to tissue. SEM micrograph of thin film electrode: nano-porous surface texture provides for excellent charge transfer from chip to tissue.

Fig 3.2.8: The electric performance of the interface between chip and tissue is critical for the proper function of the implant. From the point of view of an electrical engineer, this interface acts like a capacitor; for this reason, no DC currents may be used in electro-stimulation, only current transients. Micrograph of a cross-section through retinal tissue on a micro-photodiode, obtained by transmission electron microscopy. The electrical properties of the interface may be described by an equivalent circuit; only transient current pulses may be used to stimulate tissue.

ASR IMPLANT PROCEDURE


The microsurgical procedure consists of a standard vitrectomy plus an additional step. The surgeon starts by making three tiny incisions in the white part of the subject's eye, each no larger than the diameter of a needle. Through these incisions, the surgeon removes the gel in the middle of the eye and replaces it with saline. The surgeon then makes an opening in the retina through which fluid is injected: the fluid lifts up a portion of the retina from the back of the eye and creates a small pocket in the "subretinal space" just wide enough to accommodate the ASR microchip.
The surgeon then slides the implant into the subretinal space, much as one might slip a tiny coin into a pocket. Finally, the surgeon introduces air into the middle of the eye to gently push the retina back down over the implant. Over a period of one or two days, the air bubble is reabsorbed and replaced by fluids created within the eye. The procedure takes about 2 hours and is done on a hospital outpatient basis.

4. Cortical Implants

Scientists have created a device that allows them to communicate directly with large numbers of individual nerve cells in the visual part of the brain. The device, a silicon electrode array, may provide a means through which a limited but useful visual sense can be restored to profoundly blind individuals.


This represents the development of the first visual prosthesis providing useful "artificial vision" to a blind volunteer, by connecting a digital video camera, computer, and associated electronics to the visual cortex of his brain. The device has been the objective of a development effort begun in 1968 and represents the realization of a prediction of an artificial vision system made by Benjamin Franklin in his report on the "kite and key" experiment.

This new visual prosthesis produces black and white display of visual cortex "phosphenes" analogous to the images projected on the light bulb arrays of some sports stadium scoreboards. The system was primarily designed to promote independent mobility, not reading. It has a battery powered, electronic interface that is RF isolated from line currents for safety. This interface can replace the camera, permitting the volunteer to directly watch television and use a computer, including access to the Internet. Because of their potential importance for education, and to help integrate blind people into the workforce, such television, computer, and Internet capabilities may prove even more valuable in the future than independent mobility.

First of all, passing an electric current through a single electrode into the visual cortex causes a blind subject to see a point of light called a phosphene. The visual scene before the subject will be encoded by a miniature video camera attached to a pair of eyeglasses; the resulting video signals will be processed by custom circuitry and passed across the skull to an array of electrodes implanted in the primary visual cortex.

Relaying the electric signals to the cortical implant can be accomplished by two methods, conductive and inductive. In the former, connectors attached to the cranium provide access to the external circuitry; in the latter, a transformer is formed with one coil under the skin and the other on the outside.


Fig 4.1 Cortical Implant
A platinum foil ground plane is perforated with a hexagonal array of 5 mm diameter holes on 3 mm centers, and the flat platinum electrodes centered in each hole are 1 mm in diameter. This ground plane keeps all current beneath the dura, eliminating discomfort due to dural excitation when stimulating some single electrodes (such as number 19) and when arrays of electrodes are stimulated simultaneously. The ground plane also eliminates most phosphene interactions when multiple electrodes are stimulated simultaneously, and provides an additional measure of electrical safety that is not possible when stimulating between cortical electrodes and a ground plane outside the skull. Each electrode is connected by a separate Teflon-insulated wire to a connector contained in a carbon percutaneous pedestal.

When stimulated, each electrode produces 1-4 closely spaced phosphenes. Each phosphene in a cluster ranges up to the diameter of a pencil at arm's length. Neighboring phosphenes in each cluster are generally too close together for another phosphene to be located between them. Estimates of the primary visual cortex (area 17) indicate it would permit placement of 256 surface electrodes on 3 mm centers on each lobe in most humans (512 electrodes total).


4.1 The Electronics Package

The 292 × 512 pixel CCD black-and-white television camera is powered by a 9 V battery and connects via a battery-powered NTSC link to a sub-notebook computer in a belt pack. This f/14.5 camera, with a 69° field of view, uses a pinhole aperture instead of a lens to minimize size and weight. It also incorporates an electronic "iris" for automatic exposure control.


The sub-notebook computer incorporates a 120 MHz microprocessor with 32 MB of RAM and a 1.5 GB hard drive. It also has an LCD screen and keyboard. It was selected because of its very small size and light weight. The belt pack also contains a second microcontroller, and associated electronics to stimulate the brain. This stimulus generator is connected through a percutaneous pedestal to the electrodes implanted on the visual cortex. The computer and electronics package together are about the size of a dictionary and weigh approximately 10 pounds, including camera, cables, and rechargeable batteries. The battery pack for the computer will operate for approximately 3 hours and the battery pack for the other electronics will operate for approximately 6 hours.

This general architecture, in which one computer interfaces with the camera and a second computer controls the stimulating electronics, has been used in this and four other substantially equivalent systems since 1969. (9) The software involves approximately 25,000 lines of code in addition to the sub-notebook's operating system. Most of the code is written in C++, while some is written in C. The second microcontroller is programmed in assembly language.

4.2 Stimulation Parameters

Stimulation delivered to each electrode typically consists of a train of six pulses delivered at 30 Hz to produce each frame of the image. Frames have been produced with 1-50 pulses, and frame rates have been varied from 1 to 20 frames per second. As expected, (4) frame rates of 4 per second currently seem best, even with trains containing only a single pulse. Each pulse is symmetric and biphasic (-/+), with a pulse width of 500 µs per phase (1,000 µs total). Threshold amplitudes of 10-20 volts (zero to peak) may vary ±20% from day to day; they are higher than the thresholds of similar electrodes without the ground plane, presumably because current shunts across the surface of the pia-arachnoid and encapsulating membrane.
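The waveform described here is easy to reconstruct numerically. The sketch below builds one image frame of six symmetric biphasic pulses at 30 Hz with 500 µs phases, and verifies that it is charge-balanced; the amplitude and sample rate are placeholder values (the text quotes thresholds of 10-20 V).

```python
# One frame of the described stimulus train: 6 biphasic pulses at 30 Hz.
import numpy as np

FS = 100_000                          # samples per second (assumed)
AMP = 1.0                             # normalized amplitude placeholder
phase = int(500e-6 * FS)              # 500 us per phase -> 50 samples
period = int(FS / 30)                 # 30 Hz pulse repetition interval

pulse = np.concatenate([-AMP * np.ones(phase),   # cathodic phase (-)
                        +AMP * np.ones(phase)])  # anodic phase (+)
frame = np.zeros(6 * period)          # one image frame = 6 pulses
for k in range(6):
    frame[k * period : k * period + pulse.size] = pulse

print(frame.sum())                    # 0.0 -> net charge is balanced
```

Charge balance matters because, as noted for the retinal devices above, the electrode-tissue interface behaves like a capacitor and tolerates only transient, net-zero current.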


The system is calibrated each morning by recomputing the thresholds for each electrode, a simple procedure that takes the volunteer approximately 15 minutes with a numeric keypad.
Although stimulation of the visual cortex in sighted patients (2) frequently produces colored phosphenes, the phosphenes reported by this volunteer (and, to the best of our knowledge, all previous blind volunteers) are colorless. It is speculated that this is the result of post-deprivation deterioration of the cells and/or synaptic connections required for color vision. Consequently, color vision may never be possible in this volunteer or in future patients. However, optical filters could help differentiate colors, and it is also conceivable that chromatic sensations could be produced if future patients are implanted shortly after being blinded, before atrophy of the neural network responsible for color vision.
The problem of kindling, the triggering of seizures in neural tissue by periodic electrical stimulation, has to be solved. Biocompatibility is another concern; this particularly vexing problem has yet to be solved. A power supply for the system has to be efficiently designed, and the position of the implant within the skull has to be decided upon. Lastly, the implant should function flawlessly for years.


Conclusion And Future Scope

The application of this research work is directed towards people who are visually impaired. People ranging from those with low vision to those who are completely blind will benefit from this project. The findings regarding biocompatibility of implant materials will aid other, similar attempts at human-machine interfaces. Congenital defects of the body that cannot be fully corrected through surgery may then be corrected.

There has been a marked increase in research and clinical work aimed at understanding low vision. Future work has to focus on the optimization and further miniaturization of the implant modules. Commercially available systems that integrate video technology, image processing and low-vision research have started to emerge.

Implementing an artificial eye has clear advantages: an electronic eye is more precise and enduring than a biological one, although we cannot claim it would be used only to benefit the human race. In short, the successful implementation of a bioelectronic eye would solve many of the visual anomalies suffered by humans to date.

To be honest, the final visual outcome for a patient cannot be predicted. However, several tests can be performed before implantation with which the potential postoperative function can be estimated. With this in mind, the recognition of large objects and the restoration of the day-night cycle are the primary goals of the prototype implant.
