Saturday, May 16, 2015

Time for Robo-Chefs...!!!

Don’t have time to cook breakfast in the morning? Too tired to cook dinner in the evening? In the future, a robotic kitchen may take care of that for you. Robotic household help has long been the ultimate futuristic dream product.

British engineer Mark Oleynik and his team at Moley Robotics have unveiled a robot chef: two robotic arms mounted in a specially designed kitchen that includes a stove top, utensils and a sink. The device reproduces the movements of a human chef in order to create a meal from scratch. It can do everything from assembling and chopping the ingredients and grasping utensils, pots and dishes, to cooking on the hob or in the oven and finishing up by cleaning the dirty pans. The robot learns the movements from a human chef: the demonstration is captured on a 3D camera, uploaded to a computer and turned into an algorithm that drives the automated kitchen.
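
To get a feel for the record-and-replay idea, here is a minimal sketch in Python. Everything in it (the JointFrame type, the read_pose and send_pose callbacks, the 50 Hz sampling rate) is a hypothetical illustration, not Moley's actual software, which captures the chef with 3D cameras and motion-sensor gloves and drives Shadow Robot hands.

```python
# Minimal record-and-replay sketch: sample a demonstrator's joint angles,
# then stream them back to a robot at the original timing.

import time
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class JointFrame:
    t: float                   # seconds since the start of the demonstration
    joint_angles: List[float]  # one angle per joint/motor, in radians


def record_demo(read_pose: Callable[[], List[float]],
                duration_s: float, rate_hz: float = 50.0) -> List[JointFrame]:
    """Sample the demonstrator's joint angles at a fixed rate."""
    frames, t0 = [], time.time()
    while (elapsed := time.time() - t0) < duration_s:
        frames.append(JointFrame(elapsed, read_pose()))
        time.sleep(1.0 / rate_hz)
    return frames


def replay(frames: List[JointFrame],
           send_pose: Callable[[List[float]], None]) -> None:
    """Command the recorded trajectory back to the robot, frame by frame."""
    t0 = time.time()
    for frame in frames:
        # Wait until the frame's original timestamp, then command the pose.
        delay = frame.t - (time.time() - t0)
        if delay > 0:
            time.sleep(delay)
        send_pose(frame.joint_angles)
```

In practice such recordings would also need smoothing, segmentation into recipe steps and safety checks before being replayed on real hardware.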

Inventor Mark Oleynik, a computer scientist, said that the limbs are able to “faithfully reproduce the movements of a human hand”. The hands themselves are sophisticated creations, made up of 24 motors, 26 microcontrollers and 129 sensors, running the Robot Operating System (ROS), with the aim of cooking food better than a human chef. The Shadow Robot Company, a NASA robotics supplier, created the arms, which replicate the movements of human shoulders, elbows, wrists, hands and fingers.

There is also a thermometer for keeping an eye on the temperature of raw ingredients and stopping them from going off; the Moley team says a future version could have synthetic hands and be able to wash itself after handling raw meat. Another feature is the protective glass front and fire-extinguisher system, which make the robot safe to use around children and when you are not at home.

Moley Robotics demonstrated its concept at this year's Hannover Messe, a big trade fair for industrial technology held annually in Germany. At the demonstration, the robotic chef prepared a bowl of crab bisque from a recipe created by Tim Anderson, winner of the BBC’s MasterChef competition in 2011, in just under 30 minutes. Mr Anderson was originally recorded making the dish wearing motion-sensor gloves, and the robot was then programmed to imitate his movements down to the tiniest detail.

Eventually, Moley hopes to produce a version complete with cameras so that users can teach it to create their own dishes, which can then be uploaded to a digital recipe library and shared with other people. They also want later models to be capable of dealing with tricky things like stopping mixing at the appropriate time to prevent splitting or over-beating.

Don’t be afraid, professional chefs; the robot will not replace you. Not yet, at least, because it won't improvise or bring any intuition to a dish, and it lacks spontaneity. Moreover, the robot can't taste or smell, both critical abilities when it comes to cooking; it can't really understand flavours.

It doesn’t pick things up better than humans, though. Because it can’t see, it can only recognise an ingredient if it has been placed in a pre-determined position. Otherwise, its attempt to reach for an item will find it doing robotic jazz hands into empty air or sending mixing bowls spinning. The machine has yet to be taught to use knives and is entirely limited by the movements and speed of its tutor.

According to Moley’s website, the firm hopes to bring a consumer version to market by 2017 that will feature several additions, including a library of thousands of recipes, a dishwasher and a refrigerator. Marvellous!! You will even be able to control it remotely using an app, which means you could order your dish to be ready for when you get home.

If the hands can be taught to cook, according to the designers, there's no reason they couldn't play the piano, learn carpentry and more. But the company's primary aim is to produce a technology that addresses basic human needs and improves day-to-day quality of life.

It is designed only for the home. Of course, with a price tag of around £10,000 ($15,000), it is designed for the ridiculously wealthy home. According to Moley, it is not an industrial device that works at ten times normal speed; it is a device that moves like us, at the same speed we do. Still, I think this would be a great asset for hotels that need to satisfy a wide range of customers: the same robot can cook multiple dishes, so in effect it can replicate the same dishes for many customers in a short time span. It would be great if the cost of construction could be reduced. I hope that in future either Moley or someone else will come up with a much cheaper and more efficient version, since research on somewhat similar technologies is already under way in countries like Japan and India.


In the current prototype, the ingredients need to be prepared in advance (the robot has not yet been trusted with knives) and placed at preset positions for it to pick up. That, though, should change with future versions. Moley wants to make the unit slightly more compact and give it a built-in refrigerator, in which a stock of ingredients can be stored and selected by the robot as required, and a dishwasher, so it can help clean up after itself. The kitchen can also, if desired, be switched to manual, because all of the implements and utensils involved are pieces of normal kitchenware. For now, it seems that the robo-chef will be most useful as an assistant rather than an independent cook, but time will tell as the device comes closer to fruition.

According to the inventors, for anyone who wants to recreate a taste exactly, this is a recipe re-maker: it replicates a dish by mimicking the movements of a human chef in a controlled way, every single time. In other words, the Robotic Kitchen is essentially a culinary photocopier, producing batches of crab bisque (the dish demonstrated) entirely from memory. And what better way to learn a complex dish than to watch a pair of humanoid hands make it a few times? In short, it could even teach us to become better cooks. Remember, only humans introduce variation; the robot reliably produces the same dish each time.


This is certainly a great effort, but I do fear it could turn out to be less useful than hoped. I have one small doubt: is it really possible to reproduce the exact taste of a chef from any part of the world in our kitchen using a robo-chef? As we all know, no two ingredients are alike, not even salt; quality varies from place to place. And if your room is colder than it was during the original run, if the ingredients are a few degrees warmer, or if the raw ingredients are shaped ever so slightly differently than before, it all falls apart. I am also not fully convinced by the so-called "humanoid hands"; my suggestion would be a robotic hand inspired by the elephant's trunk, which is much more flexible. I think there is room for improvement in computer vision, and additional sensors need to be developed for it to work better.



The product is still two years away from market. The robo-chef is hugely impressive, and this could be a very big step toward the future of home life. I hope it will bring the next industrial revolution to the homes of average consumers. Moley also sees a future in which its robot could help out all over the house.

Please share your suggestions and follow us if you are interested; it encourages us to create new topics for you. And thank you for your support.

Sunday, May 10, 2015

Low Battery?? Yell at your Phone to Charge it....



Imagine that one day you could charge your mobile phone just by placing it somewhere noisy, or by yelling at it. You would no longer have to struggle with a dead battery while travelling or on an outing, away from power sources. It's no longer just imagination: scientists have come up with a postage-stamp-sized paper microphone that could top up your phone's battery by harvesting sound.

Scientists at the Georgia Institute of Technology have developed a rollable, paper-based triboelectric nanogenerator (TENG), just 125 µm thick, for harvesting sound-wave energy. It can deliver a maximum power density of 121 mW/m² (968 W/m³) under a sound pressure of 117 dB SPL; the amount of power the microphone provides depends on its size. The TENG operates in contact-separation mode, using membranes with rationally designed holes on one side.

How does this work? The researchers used a laser to zap a grid of microscopic holes in the paper, coated one side in copper and laid it on top of a thin sheet of Teflon, joining the two sheets at one edge. Sound waves vibrate the two sheets in different ways, causing them to come in and out of contact. This generates an electric charge, similar to the one produced when you rub a balloon on your hair, which can charge a phone slowly; in the reported tests it charged a capacitor at a rate of 0.144 V/s.
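
A quick back-of-the-envelope calculation shows what those numbers mean in practice. The 0.144 V/s charging rate and 121 mW/m² power density are from the article; the patch size, capacitance and target voltage below are assumptions chosen purely for illustration.

```python
# Rough numbers for the paper TENG. Figures from the article: 0.144 V/s and
# 121 mW/m^2. Assumed for illustration: a 5 cm x 5 cm patch, a 10 uF storage
# capacitor and a 3 V target voltage.

CHARGE_RATE_V_PER_S = 0.144       # reported capacitor charging rate
POWER_DENSITY_W_PER_M2 = 0.121    # 121 mW/m^2
AREA_M2 = 0.05 * 0.05             # 5 cm x 5 cm patch (assumption)

TARGET_V = 3.0                    # assumed target voltage
CAPACITANCE_F = 10e-6             # assumed 10 uF storage capacitor

time_to_target_s = TARGET_V / CHARGE_RATE_V_PER_S
energy_stored_j = 0.5 * CAPACITANCE_F * TARGET_V ** 2
harvested_power_w = POWER_DENSITY_W_PER_M2 * AREA_M2

print(f"Time to reach {TARGET_V} V at 0.144 V/s: {time_to_target_s:.1f} s")
print(f"Energy in a 10 uF cap at {TARGET_V} V: {energy_stored_j * 1e6:.1f} uJ")
print(f"Power from a 25 cm^2 patch: {harvested_power_w * 1e3:.2f} mW")
```

Even at full output, a palm-sized patch harvests well under a milliwatt, which is why the device is better thought of as a top-up rather than a replacement for a charger.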

In effect, it recycles sound energy from the environment: one could get free electricity from the 'waste' sound all around us. The charge can also be converted back into a range of sound frequencies, allowing the initial sounds to be amplified.

What’s a nanogenerator? It is a device that uses the piezoelectric, triboelectric or pyroelectric effect to convert mechanical, thermal or other energy into electricity for powering small electronic devices, most commonly by converting mechanical energy. A triboelectric nanogenerator (TENG) uses the electrostatic charge created by triboelectrification as the driving force for electron flow through an external load; the process can currently reach around 55 per cent energy-conversion efficiency, the best figure so far. To know more about nanogenerators, please follow the link to the Wiki article.


The scientists say the concept and design could be applied in a variety of circumstances for energy harvesting or sensing. Thanks to its broad working bandwidth, thin structure and flexibility, a rolled-up, self-powered microphone has been demonstrated that records sound from all directions, without angular dependence. Potential applications include wearable and flexible electronics, military surveillance, jet-engine noise reduction, a low-cost implantable artificial ear and wireless sensing.

The main benefit of such a microphone is that it could harvest acoustic energy to top up a phone's charge on the go; the TENG can be fitted to a commercial mobile phone to harvest energy from human speech. While the hope is that sound-powered devices could eventually replace conventional chargers, it may not produce enough energy to do away with current charging methods entirely, since it would provide only a small amount of power rather than fully charging the phone.

Transforming sound into battery power is not a novel idea: we first heard of a phone that could power itself with the user's voice when a team of Korean researchers revealed a prototype in 2011. With this latest work, however, it seems much more likely that sound-powered charging may soon be a reality.

Please share your suggestions and follow us if you are interested; it encourages us to create new topics for you. And thank you for your support.

Thursday, April 30, 2015

Unobtrusive Thumbnail Trackpad NailO


Just imagine you have to scroll a recipe on your laptop or tablet while preparing a meal, or send a message while attending a meeting. Sounds difficult, doesn't it? Not anymore. Researchers have come up with a novel solution that allows discreet one-handed input via a surface that is always readily available. It is useful in situations where gestures or speech input could be considered impolite or inappropriate, or where both hands are busy.

A new wearable device called NailO is a Bluetooth trackpad that is temporarily adhered to the user's thumbnail and controlled by running an index finger over its surface. Inspired by decorative nail stickers, NailO packs multilayered, miniaturised hardware that uses the same capacitive sensing technology as smartphone screens and wirelessly transmits data, via Bluetooth, to a mobile device or PC.

NailO packs a battery, capacitive sensors, a microcontroller, a Bluetooth radio chip and a capacitive-sensing chip into very tight quarters. These components work together to send information wirelessly to a smart device or PC, and the whole assembly is small and light enough to be stuck onto a user's thumbnail.



Nailo working
It is user-friendly: to use it, users first power it up by maintaining finger contact for two or three seconds. They then move their index finger up, down, left or right across its surface, guiding the cursor on the paired device. To select something on screen, they simply press down.
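
As a toy illustration of that interaction, the sketch below classifies a finger movement into a direction or a tap and checks for the two-to-three-second "wake" contact. The thresholds and function names are assumptions, not NailO's real firmware.

```python
# Hypothetical gesture classification for a thumbnail trackpad.

def classify_swipe(dx: float, dy: float, min_travel: float = 0.2) -> str:
    """Map a finger displacement (in normalised pad units) to a gesture name."""
    if abs(dx) < min_travel and abs(dy) < min_travel:
        return "tap"                      # little travel: treat as press/select
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"


def is_wake_gesture(contact_duration_s: float) -> bool:
    """Power the pad up after sustained contact (the article says 2-3 seconds)."""
    return contact_duration_s >= 2.0
```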

The main advantage of the device is that it is discreet. Running a finger over a thumbnail is a natural activity, so most people wouldn't notice it as a deliberate action to control a device. This technology could let users control wireless devices when their hands are full, like answering the phone while cooking. It could also augment other interfaces, allowing someone texting on a mobile phone, for example, to toggle between symbol sets without interrupting their typing. Finally, it could enable subtle communication in circumstances that require it, such as sending a quick text to a child while attending an important meeting.


You might be wondering: why the thumbnail? The answer is simple. It's a hard surface with no nerve endings, so a device affixed to it wouldn't impair movement or cause discomfort. It is also easily reached by the other fingers, even when the user is holding something in his or her hand.

  For the initial prototype, the team built their sensors by printing copper electrodes on sheets of flexible polyester. That allowed them to experiment with a range of electrode layouts, but now they're using off-the-shelf sheets of electrodes like those found in some touchpads.

  According to Cindy Hsin-Liu Kao, an MIT graduate student and one of the authors, the device was inspired by the colourful stickers that some women apply to their nails. The team envisions a commercial version with a detachable membrane on its surface, so that users could coordinate surface patterns with their outfits.

 The researchers are looking to consolidate the components into a single chip, which would make the device smaller and reduce power consumption, and they are already talking to manufacturers in China about a battery that could fit in the space of a thumbnail while being only half a millimetre thick. NailO users would ultimately be able to map gestures to specific actions (left thumbnail swipe = Call Mom, for example).
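
Such a gesture-to-action mapping could look something like the hypothetical sketch below; the gesture names and actions are made up purely for illustration.

```python
# Illustrative gesture-to-action mapping ("left thumbnail swipe = Call Mom").

from typing import Callable, Dict


def call_contact(name: str) -> Callable[[], None]:
    def action() -> None:
        print(f"Dialling {name}...")   # placeholder for a real phone API call
    return action


GESTURE_ACTIONS: Dict[str, Callable[[], None]] = {
    "swipe_left": call_contact("Mom"),
    "swipe_right": lambda: print("Next slide"),
    "swipe_up": lambda: print("Answer call"),
}


def handle_gesture(gesture: str) -> None:
    GESTURE_ACTIONS.get(gesture, lambda: None)()


handle_gesture("swipe_left")   # -> Dialling Mom...
```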

 In a demonstration video, it is shown being used to scroll through a recipe while the wearer's hands are otherwise occupied holding spoons and preparing food.




 You won't be able to get a NailO of your own for some time, but the prototype shows real promise. This unobtrusive wearable sensor could operate digital devices or augment other interfaces in future. Either way, it is one of the more innovative and unusual wearables we've seen in recent months, turning our nails into a trackpad and making us part of the computer.

Please share your suggestions and follow us if you are interested; it encourages us to create new topics for you. And thank you for your support.

Tuesday, April 21, 2015

Now a Battery can be charged in 1 Minute.

Smartphone charging

Our smartphones have become more advanced and power-hungry over the last few years, so scientists have been looking at ways to improve battery life for some time. Finally they may have come up with a battery that is cheap, long-lasting and quick to recharge: the phone battery you have been waiting for. The new battery technology uses aluminium and graphite and promises charging that is much quicker, needed less often and far safer. Stanford University researchers have developed this ultra-fast-charging aluminium battery.

Conventional alkaline batteries are bad for the environment, while the lithium-ion batteries used in millions of laptops and smartphones can unexpectedly burst into flames and take a long time to charge. The new battery, by contrast, is fast-charging, long-lasting and inexpensive. It is also flexible, so it could be used in the folding devices now in development.

The prototype consists of a soft pouch containing aluminium for one electrode and a graphite foam for the other: a sponge-like network of tiny graphite whiskers surrounding many empty pockets, which gives ions in the electrolyte solution very easy access to the graphite and helps the battery work faster.

When the battery discharges, aluminium dissolves at the anode, while aluminium-containing ions slide into the spaces between atomic graphite layers at the cathode. When it charges again, the reverse occurs, depositing metallic aluminium back on the anode.

Because it is lightweight and inexpensive, aluminium has attracted interest from battery engineers for many years, but it has never yielded a viable product. The trouble has been finding the right material to pair with aluminium, one capable of producing a sufficient voltage even after multiple cycles of rapid charging and discharging. Graphite, a form of carbon in which the atoms form thin, flat sheets, turned out to deliver very good performance, while also being similarly lightweight, cheap and widely available.

It offers safety advantages over the lithium-ion batteries that power most mobile devices: the materials used are less volatile and do not catch fire if the pouch is perforated, whereas lithium-ion batteries can be a fire hazard. And while a lithium-ion battery can take hours to charge, the new battery recharges in one minute. The electrolyte is basically a salt that is liquid at room temperature, held inside a flexible polymer-coated pouch, so it is very safe, in contrast with the flammable electrolytes used in lithium-ion cells.

The new battery can be recharged around 7,500 times. Typical lithium-ion batteries used in everything from smartphones and laptops to electric cars last around 1,000 recharge cycles. Another feature of the aluminium battery is flexibility. You can bend it and fold it, so it has the potential for use in flexible electronic devices.

Aluminium ion battery
The battery generates around 2 volts, less than the 3.6 volts of a conventional lithium-ion cell but the highest yet achieved with aluminium. Its energy density, the amount of electrical energy stored per unit of mass, is also lower: about 40 watt-hours per kilogram, compared with roughly 100 to 260 watt-hours per kilogram for lithium-ion. However, the researchers believe improvements in the cathode material could eventually lead to a higher voltage and energy density, and the team already managed to charge a smartphone in a minute by connecting two aluminium batteries together through an adapter.
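
A rough comparison makes the trade-off concrete. The energy densities and voltages are the figures quoted above; the 10 Wh "typical smartphone battery" is an assumption for illustration (roughly a 2,700 mAh cell at 3.7 V).

```python
# Back-of-the-envelope comparison of aluminium-ion and lithium-ion batteries.

PHONE_ENERGY_WH = 10.0           # assumed typical smartphone battery

AL_ION_WH_PER_KG = 40.0          # aluminium-ion prototype (quoted above)
LI_ION_WH_PER_KG = 150.0         # mid-range of the 100-260 Wh/kg span

al_mass_g = PHONE_ENERGY_WH / AL_ION_WH_PER_KG * 1000
li_mass_g = PHONE_ENERGY_WH / LI_ION_WH_PER_KG * 1000

print(f"Aluminium-ion mass for {PHONE_ENERGY_WH} Wh: {al_mass_g:.0f} g")   # ~250 g
print(f"Lithium-ion mass for {PHONE_ENERGY_WH} Wh:  {li_mass_g:.0f} g")    # ~67 g

# Voltage: two 2 V aluminium cells, presumably in series, roughly match one
# 3.6 V lithium-ion cell, which may be why the demo used two cells together.
print(f"Two Al-ion cells: {2 * 2.0:.1f} V vs one Li-ion cell: 3.6 V")
```

So at today's energy density an aluminium-ion pack would be several times heavier for the same charge, which is exactly why the researchers are working on better cathode materials.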

According to Clare Grey of the University of Cambridge, turning the prototype into a larger commercial product will be challenging. One problem is that squeezing ions in between the graphite sheets causes the material to expand and contract, which is "bad news for the battery". Also, the bigger the graphite sheets are, the further the ions have to diffuse in and the slower the battery becomes; part of the reason the prototype charges so fast is that it uses very small platelets of graphite.

Beyond small electronic devices, the technology could be used for storing renewable energy on the electrical grid, which could solve some of the problems presented by intermittent sources like wind and solar. The limited durability of lithium cells means they aren't ideally suited to this application, but the aluminium cells' ability to last for many thousands of charge cycles, together with their rapid charge and discharge, offers a possible future alternative.


In the video, Stanford graduate student Ming Gong and postdoctoral scholar Yingpeng Wu demonstrate how the new technology could offer a safe alternative to lithium-ion and other batteries.

Its unprecedented charging speed makes it attractive and impressive, and it has all the features a dream battery would have: inexpensive electrodes, good safety, high-speed charging, flexibility and long cycle life. The technology shows real promise, even if it is unlikely to feature in our smartphones any time soon; it does seem likely that these environmentally friendly aluminium-ion batteries could lead to safer consumer electronics in future.

Please share your views also....

Please share your suggestions and follow us if you are interested; it encourages us to create new topics for you. And thank you for your support.

Monday, April 20, 2015

Indian GPS, or as we call it, Desi GPS.

IRNSS

      India is all set to rule the skies with its own navigation system. IRNSS-1D, the fourth in a series of seven navigation satellites, has been successfully launched from Sriharikota. With this launch, the country is poised to operationalise the Indian Regional Navigation Satellite System (IRNSS).

       The latest in the IRNSS series is IRNSS-1D, which has a mission life of 10 years. The rocket, the Polar Satellite Launch Vehicle (PSLV-C27), standing around 44 metres tall and weighing around 320 tonnes, carried the 1,425 kg IRNSS-1D. It blasted off from the second launch pad at the Satish Dhawan Space Centre, around 80 km from Chennai.

     The two solar panels of IRNSS-1D consist of Ultra Triple Junction solar cells which generate about 1660 Watts of electrical power. Sun and Star sensors as well as gyroscopes provide orientation reference for the satellite. Special thermal control schemes have been designed and implemented for some of the critical elements such as atomic clocks. The Attitude and Orbit Control System (AOCS) of IRNSS-1D maintains the satellite's orientation with the help of reaction wheels, magnetic torquers and thrusters. Its propulsion system consists of a Liquid Apogee Motor (LAM) and thrusters.

     IRNSS-1D was first launched into a sub-geosynchronous transfer orbit (sub-GTO) with a 284 km perigee (nearest point to Earth) and a 20,650 km apogee (farthest point from Earth), inclined at 19.2 degrees to the equatorial plane. After injection into this preliminary orbit, the satellite's two solar panels were automatically deployed in quick succession, and the Master Control Facility (MCF) at Hassan took control of the satellite and performed the initial orbit-raising manoeuvres: one at perigee and three at apogee. These manoeuvres used the satellite's Liquid Apogee Motor (LAM), finally placing it in its circular geosynchronous orbit at its designated location.
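
For the curious, Kepler's third law gives the period of that initial sub-GTO from the perigee and apogee quoted above; the Earth constants used below are standard values.

```python
# Orbital period of the 284 km x 20,650 km sub-GTO via Kepler's third law.

import math

MU_EARTH_KM3_S2 = 398_600.4       # Earth's gravitational parameter
R_EARTH_KM = 6_378.0              # equatorial radius

perigee_alt_km = 284.0
apogee_alt_km = 20_650.0

# Semi-major axis is the average of the perigee and apogee radii.
a_km = (2 * R_EARTH_KM + perigee_alt_km + apogee_alt_km) / 2
period_s = 2 * math.pi * math.sqrt(a_km ** 3 / MU_EARTH_KM3_S2)

print(f"Semi-major axis: {a_km:.0f} km")
print(f"Orbital period : {period_s / 3600:.1f} hours")   # roughly 6 hours
```

The orbit-raising burns then stretch this roughly six-hour orbit out until its period matches one Earth rotation.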

      The IRNSS-1D satellite has two payloads, a navigation payload and a CDMA ranging payload, in addition to a laser retro-reflector that can be used for range calibration to determine the spacecraft's position precisely. The navigation payload generates signals in the L5 and S bands, and its design makes the IRNSS system interoperable and compatible with the Global Positioning System (GPS) and Galileo. Reaction wheels and magnetic torquers are also fitted to help control the spacecraft's attitude.

      GPS is a space-based satellite navigation system managed by the United States that provides location and time information in all weather conditions, anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. The system provides critical capabilities to military, civil and commercial users around the world and is freely accessible to anyone with a GPS receiver. Russia's GLONASS, China's BeiDou and the EU's Galileo also work on the same principle.

      The IRNSS architecture consists of three segments: space, ground and user.


IRNSS Architecture


  • The IRNSS space segment: consists of a constellation of seven satellites, which will focus on the region up to 1,500 km beyond India's boundaries, between longitude 40° E and 140° E and latitude ± 40°. Three will be placed in geostationary orbit at 34° E, 83° E and 131.5° E; the other four in geosynchronous orbit at an inclination of 29°, two each crossing the equator at 55° E and 111° E. The three geostationary satellites will appear fixed in the sky, while the four geosynchronous satellites will appear to trace a figure of '8' when observed from the ground.
  • The IRNSS ground segment: consists of ground stations for the generation and transmission of navigation parameters, satellite control, satellite ranging and monitoring. The critical element of the ground segment is the ISRO Navigation Centre (INC), located at Byalalu, about 40 km from Bangalore. INC is responsible for providing the time reference, generating navigation messages and monitoring and controlling ground facilities, including the IRNSS ranging stations, and hosts several key technical facilities supporting various navigation functions.
  • The IRNSS user segment: consists of the IRNSS receivers. These will be either dual-frequency (L5 and S band) or single-frequency (L5 or S band) receivers capable of applying ionospheric corrections. They will be able to receive and process navigation data from other GNSS constellations, and will continuously track all seven IRNSS satellites. The user receiver will have a minimum figure of merit (G/T) of -27 dB/K.
    
    IRNSS has a different configuration from other navigation systems. Normally navigation satellites, like those of the American GPS, are positioned in medium Earth orbit (MEO). In the case of IRNSS, four satellites will be in inclined geosynchronous orbits and the remaining three in geostationary orbit. This 'desi GPS' will be similar in function to the American GPS but regional in coverage. It will provide two types of service:

  • Standard Positioning Service (SPS), for civil use, provided to all users
  • Restricted Service (RS), an encrypted service provided only to authorised users, mainly security and intelligence organisations (military)
IRNSS System

  The main features of IRNSS include: 

  • It will provide a position accuracy of better than 20 meters in the primary service area.
  • IRNSS consists of a space segment and a ground segment; the space segment comprises seven satellites, with three in geostationary orbit and four in inclined geosynchronous orbit.
  • IRNSS satellites will orbit the Earth at a height of about 36,000 kilometres above the Earth's surface (a quick check of this figure follows the list).
  • It will be useful in land, sea and air navigation, disaster management, vehicle tracking and fleet management, integration with mobile phones, provision of precise time, mapping, and navigation aid for hikers and travellers, visual and voice navigation for drivers.
  • It can track people or vehicles and can be of immense use in disaster situations like the recent flash floods in Uttarakhand.
  • It will be a boon for the railways for tracking wagons.
  • A highly accurate Rubidium atomic clock is part of the navigation payload of the satellite.
  • Highly accurate position, velocity and time information in real time for authorized users on a variety of vehicles.
  • Good accuracy for single-frequency users with the help of ionospheric corrections.
  • All-weather operation on a 24-hour basis.
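
The ~36,000 km figure in the list above can be sanity-checked with Kepler's third law run in reverse: find the orbit whose period matches one sidereal day.

```python
# Altitude of a geosynchronous orbit, derived from its one-sidereal-day period.

import math

MU_EARTH_KM3_S2 = 398_600.4       # Earth's gravitational parameter
R_EARTH_KM = 6_378.0              # equatorial radius
SIDEREAL_DAY_S = 86_164.1         # one rotation of the Earth

r_km = (MU_EARTH_KM3_S2 * (SIDEREAL_DAY_S / (2 * math.pi)) ** 2) ** (1 / 3)
altitude_km = r_km - R_EARTH_KM

print(f"Geosynchronous altitude: {altitude_km:,.0f} km")   # about 35,786 km
```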

    After America, Russia, Europe, China and Japan, India will be the sixth in the world to have such a system. This matters in times of war, since most modern precision bombs and missiles depend on accurate positioning. Until now most of us have been relying on the American GPS, which is very popular on smartphones but not good enough for military applications, as it cannot be relied upon for seamless coverage in times of war.

     India's adversaries, such as China and Pakistan, are nuclear-weapons states and also have significant inventories of state-of-the-art missiles. There is also the possibility that the US could deny others access to GPS during political disagreements; in the past, during the Kargil war, the US denied GPS data to both India and Pakistan. Naturally, India needs a reliable and accurate space-based navigation system of its own: hence IRNSS.

   Its predecessors IRNSS-1A, 1B and 1C were launched by PSLV-C22, PSLV-C24 and PSLV-C26 in July 2013, April 2014 and October 2014 respectively, and all are functioning satisfactorily from their designated orbital positions. India expects to complete the seven-satellite constellation by the end of this year or early next year by launching the remaining three satellites in quick succession.

   IRNSS will provide positional accuracy similar to GPS: about 10 m over the Indian landmass and 20 m over the Indian Ocean. As with GPS and the US military, IRNSS will provide a more accurate restricted service for the Indian armed forces and other specially authorised users. It is not as accurate as differentially corrected GPS, which can reach millimetre level, but ISRO hopes to push its accuracy towards that range in the near future.

   Once IRNSS-1D becomes operational in the coming months, ISRO will have made India the proud owner of its own indigenous regional satellite navigation system, and the country need no longer depend on other platforms. India also proposes to offer IRNSS signals to neighbouring states, a novel way of using space assets for diplomacy. For India, it appears, IRNSS is not about competition but a way to attain strategic parity.

Please give your views.....

Please share your suggestions and follow us if you are interested; it encourages us to create new topics for you. And thank you for your support.

Friday, April 17, 2015

Biopsy Using Smartphones.

      

Don't be surprised: the camera on your phone could soon help save your life by testing whether you have cancer. Yes, cancer can be diagnosed in less than an hour using a smartphone-based device. Developed by researchers at Massachusetts General Hospital (MGH) in the US, the device can help doctors perform rapid and accurate molecular diagnosis of cancerous and non-cancerous tumours.

In a recent study, the researchers describe a smartphone-based device that uses hologram technology to collect detailed microscopic images for digital analysis of the molecular composition of cells and tissues. The device, called the D3 (digital diffraction diagnosis) system, features an imaging module with a battery-powered LED light clipped onto a standard smartphone, which records high-resolution imaging data with its camera. These images can then be sent to a central computer for analysis, with the result returned in less than 45 minutes.

Let's see how D3 works:
  • A tissue sample is taken from a biopsy or blood from a simple finger prick and is mixed with microbeads labelled with specific antibodies.
  • This mixture is then placed on a slide which is inserted into a module that can clip onto the camera of a smartphone.
  • An LED at the back of the module illuminates the sample on the slide and lens in the module magnifies the image, which is then captured using the camera on the phone.
  • When clumped around a cell, the beads alter the way the light is scattered by the sample.
  • They produce distinctive diffraction patterns in the image if clumped together. 
  • The user can send this image to a central computer for analysis.
The use of variously sized or coated beads may offer unique diffraction signatures to facilitate detection. A numerical algorithm developed by the research team for the D3 platform can distinguish cells from beads and analyse as much as 10 MB of data in less than nine-hundredths of a second. The data is transmitted for analysis to a remote graphics-processing server via a secure, encrypted cloud service, and the results can be rapidly returned to the point of care.
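
To give a flavour of what "distinguishing cells from beads" might involve, here is an illustrative sketch that thresholds an image, labels connected bright blobs and splits them by size. The synthetic image and thresholds are assumptions; MGH's actual D3 algorithm reconstructs holographic diffraction patterns and is far more sophisticated.

```python
# Toy blob analysis: count small (bead-like) and large (cell-like) bright spots.

import numpy as np
from scipy import ndimage


def count_beads_and_cells(image: np.ndarray,
                          intensity_thresh: float = 0.5,
                          bead_max_area_px: int = 20) -> dict:
    """Return counts of small (bead-like) and large (cell-like) bright blobs."""
    mask = image > intensity_thresh
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    beads = int(np.sum(areas <= bead_max_area_px))
    cells = int(np.sum(areas > bead_max_area_px))
    return {"beads": beads, "cells": cells}


# Tiny synthetic example: two small bright spots and one larger one.
img = np.zeros((64, 64))
img[5:8, 5:8] = 1.0        # bead-sized blob (9 px)
img[20:23, 40:43] = 1.0    # bead-sized blob (9 px)
img[40:50, 10:20] = 1.0    # cell-sized blob (100 px)
print(count_beads_and_cells(img))   # {'beads': 2, 'cells': 1}
```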
D3 testing patterns

Advantages include:                                                   
  • Based on the number of antibody-tagged microbeads binding to cells, D3 analysis promptly and reliably categorised biopsy samples as high-risk, low-risk or benign, with results matching those of conventional pathologic analysis.
  • With a much greater field of view than traditional microscopy, the D3 system is capable of recording data on more than 100,000 cells from a blood or tissue sample in a single image.
  • The data generated by the system matches with the conventional gold standard pathology or HPV testing for molecular profiling.
  • A single cancer diagnosis test costs around £1.20 ($1.80). 
D3 analysis of fine-needle lymph-node biopsy samples was able to accurately differentiate four patients whose lymphoma diagnosis was confirmed by conventional pathology from another four with benign lymph-node enlargement. Besides protein analysis, the system was extended to detect DNA, in this instance from human papillomavirus, with great sensitivity. The scientists used an iPhone 4S in their tests, which suggests an 8 MP camera is enough.

Smartphone app for diagnosing cancer working

They also used it to detect infection with human papillomavirus (HPV), which is thought to cause cervical cancer. Having filed a patent application for the D3 technology, the researchers will next test the device in resource-limited areas. In future they hope to use the D3 system to screen for cervical cancer, the third most prevalent cancer, with most cases occurring outside the U.S.

Smartphones and wearable electronics have advanced tremendously over the last several years, but until now they have fallen short of enabling molecular diagnostics. Because the system is compact, easy to operate and readily integrated with a standard smartphone, this approach could bring accurate, cheap cancer diagnosis to geographically or socioeconomically limited areas. Surely it's a boon, offering essential features at an extraordinarily low cost.

Please share your thoughts on same...........

Please share your suggestions and follow us if you are interested; it encourages us to create new topics for you. And thank you for your support.

Tuesday, April 07, 2015

Surgery-Assisting Robots From Google...!!!!


Beyond its main business of Internet search and advertising, Google is becoming more innovative in the healthcare field. You heard it right: in its latest endeavour, Google will develop surgical robots that use artificial intelligence. The company has recently teamed up with pharmaceutical giant Johnson & Johnson to build robots that could assist surgeons in the operating theatre.

Google's life sciences division will be working with Johnson & Johnson's medical device company, Ethicon, to create a robotics-assisted surgical platform to help doctors in the operating theatre, combining Google's expertise with Ethicon's knowledge and intellectual property. The project promises minimally invasive surgery, giving surgeons improved accuracy and reducing scarring and trauma significantly. The result could mean faster healing times for anyone undergoing invasive surgery.

Google believes it can enhance the robotic tools using artificial intelligence technologies, including machine vision and image-analysis software, to help surgeons see better during an operation or to make it easier to access information relevant to the surgery. Surgeons will still have ultimate control over surgical decisions, while the platform acts as a supportive tool. The two firms will explore how advanced imaging and sensors could complement surgeons' abilities, for example by highlighting blood vessels, nerves, tumour margins or other important structures that can be hard to discern in tissue by eye or on a screen.

Robot-assisted surgery isn't new; it has been used since 1985 to improve accuracy in operating rooms, including in heart, eye and prostate surgery, and it works best for operations that require small incisions and high levels of precision. Surgeons typically consult multiple separate screens in the operating room to check preoperative medical images such as MRIs, the results of previous surgeries and lab tests, or to work out how to navigate an unusual anatomical structure. Google says its software could place these images on the same screen surgeons use to control the robotic tools, reducing the need to look away during procedures.

Google will provide software and expertise for data analysis and vision but will not develop the control mechanisms for the robots. Currently the da Vinci Surgical System, from California-based Intuitive Surgical, dominates the robotic surgery field.

Gary Pruden, who heads the Johnson & Johnson global surgery group, said the collaboration with Google and J&J unit Ethicon "is another important step in our commitment to advancing surgical care, and together, we aim to put the best science, technology and surgical know-how in the hands of medical teams around the world."

So far only the announcement has been made. The project will have a long research and development phase, and it is not clear at this time when the technology will be used in hospitals, or for which procedures. We can hope Google will win the challenge of making robotic surgery safer and actually make a difference in the medical world.


Please share your suggestions and follow us if you are interested; it encourages us to create new topics for you. And thank you for your support.