Tuesday, December 22, 2009

Hybrid Bike

    I found this news on the EFY site, and it's great that somebody is developing this kind of thing; it will help future generations. Bengaluru-based Eko Vehicles has launched the world's first hybrid motorcycle, the ET-120. Developed in collaboration with US-based Emerging Vehicle Technologies, this wonder bike of 120cc capacity is expected to go on sale in May 2010. The ET-120 is the first hybrid two-wheeler, fitted with both an electric motor and a 70cc petrol engine.


According to the company, the Eko ET-120 will return a fuel consumption figure of 120 kpl and offers a maximum speed of 65 km/hour.
The vehicle will cost around Rs 40,000.



I think it's great. What's your opinion about this?



Please give your suggestions and follow us if you are interested; it encourages us to create new topics for you. Thank you for your support.

Friday, November 20, 2009

Intel Atom Processor N270

You have to use this processor for the Intel India Embedded Challenge.

I got some notes about it; go through the specifications given below.
The Intel Atom processor N270, implemented in 45nm technology, is power-optimized and delivers robust performance-per-watt for cost-effective embedded solutions. Featuring extended lifecycle support, this processor offers an excellent solution for embedded market segments such as digital signage, interactive clients (kiosks, point-of-sale terminals), thin clients, digital security, residential gateways, print imaging, and commercial and industrial control. The processor remains software compatible with previous 32-bit Intel architecture and complementary silicon. This single-core processor is validated with the Mobile Intel 945GSE Express chipset, consisting of the Intel 82945GSE Graphics Memory Controller Hub and Intel I/O Controller Hub 7-M. The chipset features power-efficient graphics with an integrated 32-bit 3D graphics engine based on Intel Graphics Media Accelerator 950 architecture with SDVO, LVDS, CRT, and TV-Out display ports. It provides rich I/O capabilities and flexibility via high-bandwidth interfaces such as PCI Express, PCI, Serial ATA, and Hi-Speed USB 2.0 connectivity. It also includes a single channel for 400/533 MHz DDR2 system memory (SODIMM or memory down) and an Intel High Definition Audio interface.


Highlights

• Intel Atom processor N270 at 1.6 GHz core speed with 533 MHz AGTL+ front-side bus (FSB) and 2.5 watts thermal design power (TDP)

• Intel’s hafnium-based 45nm Hi-k metal gate silicon process technology reduces power consumption, increases switching speed, and significantly increases transistor density over previous 65nm technology

• Hyper-Threading Technology (two threads) provides high performance-per-watt efficiency in an in-order pipeline and increased system responsiveness in multi-tasking environments. One execution core is seen as two logical processors, and parallel threads are executed on a single core with shared resources (see the small sketch after this list)

• Enhanced Intel SpeedStep® Technology reduces average system power consumption

• Enhanced low-power sleep states (C1E, C2E, C4E) are optimized for power by forcibly reducing the performance state of the processor when it enters a package low-power state

• Dynamic L2 cache sizing reduces leakage due to transistor sleep mode

• Intel® Streaming SIMD Extensions (SSE) 2 and Intel® SSE3 enable software to accelerate data processing in specific areas, such as complex arithmetic and video decoding

• FSB lane reversal enables flexible routing

• Execute Disable Bit prevents certain classes of malicious “buffer overflow” attacks

• Along with a strong ecosystem of hardware and software vendors, including members of the Intel® Embedded and Communications Alliance (intel.com/go/eca), Intel helps cost-effectively meet development challenges and speed time-to-market

• Embedded lifecycle support protects system investment by enabling extended product availability for embedded customers
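
The Hyper-Threading point above is easy to observe from software. Below is a minimal sketch, assuming a Linux system where /proc/cpuinfo exposes the "physical id" and "core id" fields; it simply contrasts the logical processor count reported by the OS with the number of distinct physical cores. On an Atom N270 with Hyper-Threading enabled this would typically show 2 logical CPUs backed by 1 physical core.

```python
# Minimal sketch (Linux-only assumption): count logical CPUs vs. physical cores.
import os

def physical_core_count(cpuinfo="/proc/cpuinfo"):
    cores = set()
    physical_id = core_id = None
    with open(cpuinfo) as f:
        for line in f:
            if line.startswith("physical id"):
                physical_id = line.split(":")[1].strip()
            elif line.startswith("core id"):
                core_id = line.split(":")[1].strip()
            elif not line.strip():  # a blank line ends one processor entry
                if physical_id is not None and core_id is not None:
                    cores.add((physical_id, core_id))
                physical_id = core_id = None
    if physical_id is not None and core_id is not None:
        cores.add((physical_id, core_id))
    return len(cores) or None  # None if the fields are absent on this platform

if __name__ == "__main__":
    print("Logical CPUs  :", os.cpu_count())
    print("Physical cores:", physical_core_count())
```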

Please give your suggestions and follow us if you are interested; it encourages us to create new topics for you. Thank you for your support.

Intel® India Embedded Challenge


I think this is very helpful for all embedded engineers.
The Intel Higher Education Program, in collaboration with the Embedded and Communication Group (ECG) of Intel Technology India Pvt. Ltd., proudly presents the Intel India Embedded Challenge 2010, an embedded design contest for students, interested individuals and entrepreneurs from all over India. This contest has been put forward to inspire the vast technically savvy community in India to architect, design and develop novel embedded applications in areas such as consumer electronics, digital security surveillance, medical, storage and others.
The proposed designs must be based on Intel® Atom™ Processors for embedded computing. Visit this link for more information: http://developer.intel.com/design/intarch/atom/index.htm. There is also a range of application reference designs provided by Intel. In order to select the right platform, you can also use the step-by-step guide at http://edc.intel.com/Step-by-Step/
There are two categories of participants: Student and Professional.
The Student category includes engineering students (UG/PG/PhD) and MBA students. The Professional category includes all technical professionals interested in embedded systems. Faculty from academia can either participate in the Student category with student teams, or participate individually or in a team of fellow faculty members under the Professional category.
As a first step, a team of 1-4 participants must submit an idea/abstract with as much detail as possible. Based on this abstract, evaluators will shortlist projects; the shortlisted teams will then be required to submit a detailed abstract, from which the evaluators will select the finalists.
The shortlisted finalists will get one of the Intel hardware platforms free of cost, based on their choice or on evaluation by Intel. They can then continue to work on their prototype for the final contest. Intel India will also offer guidance and help by assigning mentors to the projects.
The Intellectual Property (IP) rights will remain solely with the participants, with Intel having rights to publish the contest details. It is recommended that the participants take appropriate steps to protect their IP rights. Intel will not be responsible for misuse/loss of IP rights that may arise due to this contest.

Please give your suggestions and follow us if you are interested; it encourages us to create new topics for you. Thank you for your support.

Sunday, May 03, 2009

Recover Any Data From PC


Data recovery is the process of salvaging data from damaged, failed, corrupted, or inaccessible secondary storage media when it cannot be accessed normally. Often the data are salvaged from storage media formats such as hard disk drives, storage tapes, CDs, DVDs, RAID arrays, and other electronics. Recovery may be required due to physical damage to the storage device or logical damage to the file system that prevents it from being mounted by the host operating system.
The most common "data recovery" issue involves an operating system (OS) failure (typically on a single-disk, single-partition, single-OS system), where the goal is to simply copy all wanted files to another disk. This can be easily accomplished with a Live CD, most of which provide a means to 1) mount the system drive, 2) mount and backup disk or media drives, and 3) move the files from the system to the backup with a file manager or optical disc authoring software. Further, such cases can be mitigated by disk partitioning and consistently moving valuable data files to a different partition from the replaceable OS system files.
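
For the single-disk, single-OS case described above, the "copy everything wanted to another disk" step can be as simple as the sketch below. It assumes, purely for illustration, that a live environment has already mounted the failed system's disk at /mnt/system and a backup disk at /mnt/backup; the paths and directory names are placeholders, not part of any particular Live CD.

```python
# Minimal sketch: copy wanted directories from a mounted (failing) system disk
# to a backup disk, skipping individual files that cannot be read.
# Requires Python 3.8+ for dirs_exist_ok.
import os
import shutil

WANTED = ["home", "etc"]            # directories worth salvaging (assumption)
SRC_ROOT = "/mnt/system"            # mount point of the failed OS disk (assumption)
DST_ROOT = "/mnt/backup/rescue"     # destination on the backup disk (assumption)

def copy_wanted():
    for name in WANTED:
        src = os.path.join(SRC_ROOT, name)
        dst = os.path.join(DST_ROOT, name)
        if not os.path.isdir(src):
            continue
        try:
            shutil.copytree(src, dst, dirs_exist_ok=True)
        except shutil.Error as errors:
            # copytree collects per-file failures instead of stopping at the first one
            for src_file, _, reason in errors.args[0]:
                print("skipped", src_file, "->", reason)

if __name__ == "__main__":
    copy_wanted()
```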
The second type involves a disk-level failure such as a compromised file system, a damaged disk partition, or a hard disk failure, in each of which the data cannot be easily read. Depending on the case, solutions involve repairing the file system, partition table or MBR, or hard disk recovery techniques ranging from software-based recovery of corrupted data to hardware replacement on a physically damaged disk. These last two typically indicate the permanent failure of the disk, so "recovery" means sufficient repair for a one-time recovery of files.
A third type involves retrieving files that have been deleted from storage media. Although there is some confusion over the term, "data recovery" may also be used to refer to such cases in the context of forensics or espionage.

Recovering Data After Physical Damage

        A wide variety of failures can cause physical damage to storage media. CD-ROMs can have their metallic substrate or dye layer scratched off; hard disks can suffer any of several mechanical failures, such as head crashes and failed motors; tapes can simply break. Physical damage always causes at least some data loss, and in many cases the logical structures of the file system are damaged as well. This causes logical damage that must be dealt with before any files can be salvaged from the failed media.

Most physical damage cannot be repaired by end users. For example, opening a hard disk in a normal environment can allow airborne dust to settle on the platter and become caught between the platter and the read/write head, causing new head crashes that further damage the platter and thus compromise the recovery process. Furthermore, end users generally do not have the hardware or technical expertise required to make these repairs. Consequently, costly data recovery companies are often employed to salvage important data. These firms often use "Class 100" / ISO-5 clean room facilities to protect the media while repairs are being made. (Any data recovery firm without a pass certificate of ISO-5 or better will not be accepted by hard drive manufacturers for warranty purposes.)

Recovery Techniques
     Recovering data from physically-damaged hardware can involve multiple techniques. Some damage can be repaired by replacing parts in the hard disk. This alone may make the disk usable, but there may still be logical damage. A specialized disk-imaging procedure is used to recover every readable bit from the surface. Once this image is acquired and saved on a reliable medium, the image can be safely analysed for logical damage and will possibly allow for much of the original file system to be reconstructed.

Hardware Repair

Media that has suffered a catastrophic electronic failure will require data recovery in order to salvage its contents.
Examples of physical recovery procedures include: removing a damaged PCB (printed circuit board) and replacing it with a matching PCB from a healthy drive; performing a live PCB swap (when the System Area of the HDD is damaged on the target drive, it is read from the donor drive instead, and the PCB is then disconnected while still under power and transferred to the target drive); replacing the read/write head assembly with matching parts from a healthy drive; removing the hard disk platters from the original damaged drive and installing them into a healthy drive; and often a combination of these procedures. Some data recovery companies have procedures that are highly technical in nature and are not recommended for an untrained individual. Any of them will almost certainly void the manufacturer's warranty.

Disk imaging

[Figure: result of a failed data recovery from a hard disk drive.]
The extracted raw image can be used to reconstruct usable data after any logical damage has been repaired. Once that is complete, the files may be in usable form although recovery is often incomplete.
Open source tools such as DCFLdd v1.3.4-1 or DOS tools such as HDClone can usually recover data from all but the physically damaged sectors. A 2007 Defense Cyber Crime Institute study shows that DCFLdd v1.3.4-1 installed on a Linux 2.4 kernel system produces extra "bad sectors", resulting in the loss of information that is actually available; when installed on a FreeBSD kernel system, only the genuinely bad sectors are lost. Another tool that can correctly image damaged media is ILook IXImager, a tool available only to government and law enforcement.
Typically, hard disk drive data recovery imaging has the following abilities:
(1) Communicating with the hard drive by bypassing the BIOS and operating system, which are very limited in their ability to deal with drives that have "bad sectors" or take a long time to read.
(2) Reading data from "bad sectors" rather than skipping them (by using various read commands and ECC to recreate damaged data).
(3) Handling issues caused by unstable drives, such as resetting/re-powering the drive when it stops responding or skipping sectors that take too long to read (read instability can be caused by minute mechanical wear and other issues).
(4) Pre-configuring drives by disabling certain features, such as SMART and G-List re-mapping, to minimize imaging time and the possibility of further drive degradation.
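
To make abilities (2) and (3) above concrete, here is a minimal sketch of an imaging loop that retries failed reads and pads unreadable sectors with zeros, so everything that can still be read ends up in the image. The device path, image path and sector size are assumptions; a real imager such as DCFLdd is far more capable and much faster.

```python
# Minimal sketch: sector-by-sector imaging with retries and zero-fill for bad sectors.
import os

SECTOR = 512
SOURCE = "/dev/sdb"             # failing drive (assumption)
IMAGE = "/mnt/backup/sdb.img"   # destination image file (assumption)
RETRIES = 3

def image_drive():
    bad = 0
    src = os.open(SOURCE, os.O_RDONLY)
    try:
        size = os.lseek(src, 0, os.SEEK_END)      # size of the block device
        with open(IMAGE, "wb") as out:
            for offset in range(0, size, SECTOR):
                data = None
                for _ in range(RETRIES):
                    try:
                        os.lseek(src, offset, os.SEEK_SET)
                        data = os.read(src, SECTOR)
                        break
                    except OSError:
                        continue                   # retry this sector
                if not data:
                    data = b"\x00" * SECTOR        # unreadable: keep the offsets aligned
                    bad += 1
                out.write(data)
    finally:
        os.close(src)
    print("unreadable sectors:", bad)

if __name__ == "__main__":
    image_drive()
```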

Recovering data after logical damage

Logical damage is primarily caused by power outages that prevent file system structures from being completely written to the storage medium, but problems with hardware (especially RAID controllers) and drivers, as well as system crashes, can have the same effect. The result is that the file system is left in an inconsistent state. This can cause a variety of problems, such as strange behavior (e.g., infinitely recursing directories, drives reporting negative amounts of free space), system crashes, or an actual loss of data. Various programs exist to correct these inconsistencies, and most operating systems come with at least a rudimentary repair tool for their native file systems. Linux, for instance, comes with the fsck utility, Mac OS X has Disk Utility, and Microsoft Windows provides chkdsk. Third-party utilities such as The Coroner's Toolkit and The Sleuth Kit are also available, and some can produce superior results by recovering data even when the disk cannot be recognized by the operating system's repair utility. Utilities such as TestDisk can be useful for reconstructing corrupted partition tables.
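
As a small illustration of running such a repair tool from a script (the device path is an assumption, and the partition must be unmounted first), fsck can be invoked in read-only "no changes" mode to see whether problems exist before attempting a repair:

```python
# Minimal sketch: run fsck with -n (answer "no" to all prompts, change nothing)
# against an illustrative partition and report the outcome.
import subprocess

result = subprocess.run(["fsck", "-n", "/dev/sdb1"],
                        capture_output=True, text=True)
print(result.stdout)
print("exit status:", result.returncode)  # non-zero indicates errors were found
```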
Some kinds of logical damage can be mistakenly attributed to physical damage. For instance, when a hard drive's read/write head begins to click, most end-users will associate this with internal physical damage. This is not always the case, however. Another possibility is that the firmware of the drive or its controller needs to be rebuilt in order to make the data accessible again.

Preventing logical damage

The increased use of journaling file systems, such as NTFS 5.0, ext3, and XFS, is likely to reduce the incidence of logical damage. These file systems can always be "rolled back" to a consistent state, which means that the only data likely to be lost is what was in the drive's cache at the time of the system failure. However, regular system maintenance should still include the use of a consistency checker. This can protect both against bugs in the file system software and latent incompatibilities in the design of the storage hardware. One such incompatibility arises when the disk controller reports that file system structures have been saved to the disk when they have not actually been written. This can occur if the drive stores data in its write cache and then claims it has been written to the disk. If power is lost, and this data contains file system structures, the file system may be left in an inconsistent state such that the journal itself is damaged or incomplete. One solution to this problem is to use hardware that does not report data as written until it actually is written. Another is using disk controllers equipped with a battery backup so that the waiting data can be written when power is restored. Finally, the entire system can be equipped with a battery backup that may make it possible to keep the system on in such situations, or at least to give enough time to shut down properly.
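
At the application level, the same "do not claim it is written until it really is" idea looks roughly like the sketch below: flush the user-space buffer and ask the operating system to commit the data to stable storage before treating the write as durable. The file path and payload are purely illustrative.

```python
# Minimal sketch: force a write through the buffers before relying on it.
import os

def durable_write(path, payload: bytes):
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()              # empty the user-space buffer
        os.fsync(f.fileno())   # ask the OS to push the data to stable storage

durable_write("/tmp/journal-entry.bin", b"critical file system metadata")
```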

Recovery techniques

Two common techniques used to recover data from logical damage are consistency checking and data carving. While most logical damage can be either repaired or worked around using these two techniques, data recovery software can never guarantee that no data loss will occur. For instance, in the FAT file system, when two files claim to share the same allocation unit ("cross-linked"), data loss for one of the files is essentially guaranteed.

Consistency checking

The first, consistency checking, involves scanning the logical structure of the disk and checking to make sure that it is consistent with its specification. For instance, in most file systems, a directory must have at least two entries: a dot (.) entry that points to itself, and a dot-dot (..) entry that points to its parent. A file system repair program can read each directory and make sure that these entries exist and point to the correct directories. If they do not, an error message can be printed and the problem corrected. Both chkdsk and fsck work in this fashion. This strategy suffers from two major problems. First, if the file system is sufficiently damaged, the consistency check can fail completely. In this case, the repair program may crash trying to deal with the mangled input, or it may not recognize the drive as having a valid file system at all. The second issue is the disregard for data files. If chkdsk finds a data file to be out of place or unexplainable, it may delete the file without asking. This is done so that the operating system may run more smoothly, but the files deleted are often important user files which cannot be replaced. Similar issues arise when using system restore disks (often provided with proprietary systems like Dell and Compaq), which restore the operating system by removing the previous installation. This problem can often be avoided by installing the operating system on a separate partition from your user data.
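
Real checkers such as fsck and chkdsk work directly on the on-disk structures, but the ".." rule described above can be sketched against a mounted tree purely as an illustration; the /mnt/system root below is an assumption.

```python
# Minimal sketch: verify that each directory's ".." entry resolves to its parent
# by comparing device and inode numbers.
import os

def check_dotdot(root):
    problems = []
    for dirpath, dirnames, _ in os.walk(root):
        for d in dirnames:
            child = os.path.join(dirpath, d)
            if os.path.islink(child):
                continue                      # a symlink's ".." belongs to its target
            parent = os.stat(dirpath)
            dotdot = os.stat(os.path.join(child, ".."))
            if (parent.st_dev, parent.st_ino) != (dotdot.st_dev, dotdot.st_ino):
                problems.append(child)
    return problems

if __name__ == "__main__":
    for bad in check_dotdot("/mnt/system"):
        print("inconsistent '..':", bad)
```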


Data carving

Data carving is a data recovery technique that allows data with no file system allocation information to be extracted by identifying the sectors and clusters belonging to the file. Data carving usually searches through raw sectors looking for specific desired file signatures. Because there is no allocation information, the investigator must specify a block size of data to carve out upon finding a matching file signature. This relies on the assumption that the beginning of the file is still present, and there is (depending on how common the file signature is) a risk of many false hits. Also, data carving requires that the files recovered be located in sequential sectors (rather than fragmented), as there is no allocation information to point to fragmented file portions. This method can be time- and resource-intensive.
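
A minimal sketch of signature-based carving is shown below: it scans a raw image for the JPEG start-of-image marker and carves a fixed-size block at every hit, since without allocation information the real end of the file is unknown. The image path and the 4 MB carve size are assumptions, and a real carver would stream the image rather than load it whole.

```python
# Minimal sketch: carve candidate JPEGs out of a raw disk image by signature.
JPEG_SOI = b"\xff\xd8\xff"          # JPEG start-of-image marker
CARVE_SIZE = 4 * 1024 * 1024        # carve 4 MB per hit (arbitrary assumption)
IMAGE = "/mnt/backup/sdb.img"       # raw image produced earlier (assumption)

def carve_jpegs(image_path, out_prefix="carved"):
    with open(image_path, "rb") as f:
        data = f.read()             # fine for a sketch; real tools stream the image
    hits = 0
    pos = data.find(JPEG_SOI)
    while pos != -1:
        with open(f"{out_prefix}_{hits:04d}.jpg", "wb") as out:
            out.write(data[pos:pos + CARVE_SIZE])
        hits += 1
        pos = data.find(JPEG_SOI, pos + 1)
    return hits

if __name__ == "__main__":
    print("carved", carve_jpegs(IMAGE), "candidate JPEGs")
```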

Recovering overwritten data

When data have been physically overwritten on a hard disk, it is generally assumed that the previous data are no longer possible to recover. In 1996, Peter Gutmann, a respected computer scientist, presented a paper suggesting that overwritten data could be recovered through the use of scanning transmission electron microscopy. In 2001, he presented another paper on a similar topic. Substantial criticism has followed, primarily dealing with the lack of any concrete examples of significant amounts of overwritten data being recovered. To guard against this type of data recovery, he and Colin Plumb designed the Gutmann method, which is used by several disk scrubbing software packages.
Although Gutmann's theory may be correct, there's no practical evidence that overwritten data can be recovered. Moreover, there are good reasons to think that it cannot.
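
For completeness, multi-pass overwriting in the spirit of such disk-scrubbing packages can be sketched as below. This is not the exact Gutmann pass sequence, just repeated random overwrites of a single file with a sync after each pass.

```python
# Minimal sketch: overwrite a file with random data several times, then delete it.
import os

def scrub_file(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # one pass of random data (held in memory; fine for small files)
            f.flush()
            os.fsync(f.fileno())        # make sure the pass reaches the disk
    os.remove(path)

# scrub_file("/tmp/secret.txt")
```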

Recovery software

There is a very simple way to recover any data from a PC using software.

Go to the link below to download the software:

http://it---market.googlegroups.com/web/data+recovery.zip?hl=en&gda=FFfC3EUAAACcVsG-OJwow5SmY1BOc327Ov20wtl5JHExVBiqjrakJgitZqlFzJbX3_a2dSbI-1MytiJ-HdGYYcPi_09pl8N7q1mtnwdud3z-aCTF6FZxDA


Please give your suggestions and follow us if you are interested; it encourages us to create new topics for you. Thank you for your support.