
Sony today announced two new Cyber-shot(R) cameras (DSC-TX1 and DSC-WX1 models) that provide unprecedented advances in low-light performance with approximately twice the sensitivity of cameras with traditional image sensors.

These Cyber-shot cameras are the first to employ Sony's new "Exmor R" back illuminated CMOS sensor technology to improve shooting in low-light scenarios, enhancing image clarity and drastically reducing grain.

"With these new "Exmor R" CMOS sensor cameras, Sony has vastly improved the customer experience for taking pictures with digital still cameras in low-light scenarios," said Phil Lubell, director of the digital imaging business at Sony Electronics. "We've all taken pictures in dimly lit situations, like blowing out candles on a birthday cake, and the results were grainy and unclear. By redesigning the way these cameras capture light, Sony is leading the industry by creating this easy way to take amazingly clear, vibrant photos in low lighting scenarios."

"Exmor R" Sensor Optimizes Low-Light Performance
Conventional image sensor architecture has required wires and other circuit elements to be positioned above the light sensitive photo-diodes, limiting the imager's light gathering capability. Positioning these elements behind the photo-diodes, Sony's "Exmor R" image sensors can gather more light, resulting in approximately twice the sensitivity compared to conventional sensors.

To further extend low-light shooting performance, the TX1 and WX1 cameras incorporate the hand-held twilight and anti-motion blur multi-shot modes introduced in Sony's breakthrough Cyber-shot DSC-HX1. Using "Exmor R" CMOS sensor's high speed, these modes capture six separate images in less than a second and utilize Sony's BIONZ(TM) processor to combine the shots into a single image of extraordinary detail and low noise.
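Sony's BIONZ pipeline is proprietary, but the general principle behind multi-shot noise reduction can be sketched: averaging several aligned frames of a static scene reduces uncorrelated sensor noise by roughly the square root of the frame count. A minimal illustration with synthetic data (not Sony's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical static scene: constant mid-gray, plus random sensor noise
# added independently to each of six captured frames.
scene = np.full((100, 100), 128.0)
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(6)]

# Averaging six aligned frames cuts random noise by about sqrt(6).
stacked = np.mean(frames, axis=0)

noise_single = np.std(frames[0] - scene)
noise_stacked = np.std(stacked - scene)
print(round(noise_single / noise_stacked, 2))  # close to sqrt(6)
```

Real cameras must also align the frames before averaging, since the six shots are taken hand-held; the averaging step itself is what suppresses the grain.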

Combining the "Exmor R" technology with hand-held twilight and anti-motion blur modes delivers a breakthrough in low-light photography. Users can now capture images of stunning detail and low noise in scenes with no more than candlelight--without flash or the need of a tripod.

Innovative Sweep Panorama(TM) and High-Speed Shooting
In addition to their breakthrough low light performance, these new cameras also include Sony's Sweep Panorama and 10 frames per second burst shooting features, which were introduced with the Sony(R) DSC-HX1 camera. The TX1 and WX1 cameras offer these features in smaller, more compact bodies that match nearly any unique style.

Capturing wide landscapes is as easy as "press and sweep." Sweep Panorama mode lets you reach beyond the traditional wide-angle lens and capture breathtaking shots. Using the high-speed "Exmor R" CMOS sensor, the cameras shoot continuously while you sweep across the scene. Using the BIONZ imaging processor, they automatically stitch the pictures together to create one stunning panoramic photo.

The TX1 and WX1 Cyber-shot models can take up to 185-degree and 256-degree panorama shots respectively in one easy press-and-sweep motion, with an image size of up to 7152 x 1080 (ultra-wide horizontal).

Advanced Technology and Compact Design
While the HX1 camera is a well-rounded solution for customers who are looking for high-zoom and speed in a smaller size than a DSLR, the TX1 and WX1 cameras are made for an audience that wants advanced technology in an even more compact design.

With its slim profile of just 16.5mm, the 10.2 mega-pixel TX1 offers streamlined, distinguished curves for a sophisticated look appealing to the fashion-oriented who are also looking for great performance. This model features a new touch-panel operation that lets you scroll through images with an effortless "flick" of your finger and directly access menus on the 3-inch Clear Photo LCD Plus(TM) display.

With a Carl Zeiss(R) Vario-Tessar(R) lens, the TX1 camera lets you focus as little as 0.4 inches from your subject for extraordinary close-up shots. The 4x telescopic zoom is perfect for capturing far-away subjects, and Sony's Optical SteadyShot(TM) image stabilization helps overcome camera shake.

The 10.2 mega-pixel WX1 camera has a 2.7-inch Clear Photo LCD Plus display and is just over three quarters of an inch thin--an ideal choice for DSLR owners who also want to carry a compact, high performance digital still camera.

The WX1 camera features a Sony G lens with an extraordinary wide angle 24-120mm 5x optical zoom. This lens' f/2.4 maximum aperture offers nearly twice the light gathering capability of conventional lenses, and works together with the "Exmor R" imager and low-light shooting modes to provide low-light photography beyond the abilities of other compact cameras.
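The "nearly twice the light gathering" claim is easy to sanity-check: the light a lens gathers scales with the inverse square of its f-number. Assuming a conventional compact lens of around f/3.5 for comparison (an assumption for illustration; the release does not name the comparison lens):

```python
# Light gathered scales inversely with the square of the f-number.
# f/3.5 is an assumed "conventional" compact-camera aperture.
def relative_light(f_fast, f_slow):
    return (f_slow / f_fast) ** 2

ratio = relative_light(2.4, 3.5)
print(round(ratio, 2))  # 2.13, i.e. "nearly twice" the light
```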

Tech Savvy Cameras
These cameras include the most recent Sony technology, including Intelligent Auto (iAuto) mode, which recognizes scenes, lighting conditions and faces and adjusts settings for clearer images, more natural skin tones and less blur; Face Detection, which detects up to eight faces and optimizes focus, flash, exposure and white balance; and intelligent Scene (iSCN), which delivers nine Scene Selection modes to quickly adjust for specific shooting conditions.

Pet mode is a new Sony feature that minimizes blur when shooting moving pets. This new mode also reduces glowing pet red-eye.

Additionally, the cameras have technologies Sony Cyber-shot customers have come to expect. These include Smile Shutter(TM) technology that automatically captures a smile, dynamic range optimization (DRO) that improves exposure and contrast, intelligent Auto Focus that captures fleeting moments and HD video capability that records HD movies in 720p high definition MPEG4 format.

With HD video capability, these cameras record HD movies in 720p high definition MPEG4 format for stunning large-screen home movie playback. You can record up to 29 minutes (or up to 2GB file size) in 720p format.
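The 2GB / 29-minute recording cap implies an average video bitrate, which can be worked out directly:

```python
# Back-of-the-envelope bitrate implied by the 2GB / 29-minute recording cap.
file_size_mbit = 2 * 1024 * 8      # 2GB expressed in megabits
duration_s = 29 * 60               # 29 minutes in seconds

bitrate_mbps = file_size_mbit / duration_s
print(round(bitrate_mbps, 1))  # 9.4 Mbit/s, a plausible 720p MPEG-4 rate
```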

Pricing and Availability
The TX1 camera will be available in silver, gray, pink and blue this September for about $380. The WX1 camera will be available in black this October for about $350. Pre-sales will start in August. The cameras and a range of accessories will be available online at sonystyle.com, at Sony Style(R) retail stores (www.sonystyle.com/retail), at military base exchanges and at authorized dealers nationwide.

source: PRNewswire

A submarine communications cable is a cable laid beneath the sea to carry telecommunications between countries, continents and islands.

The first submarine communications cables carried telegraphy traffic. Subsequent generations of cables carried first telephony traffic, then data communications traffic. All modern cables use optical fiber technology to carry digital payloads, which are then used to carry telephone traffic as well as Internet and private data traffic. They are typically 69 millimetres (2.7 in) in diameter and weigh around 10 kilograms per meter (7 lb/ft), although thinner and lighter cables are used for deep-water sections.


As of 2003, submarine cables link all the world's continents except Antarctica.

After William Cooke and Charles Wheatstone had introduced their working telegraph in 1839, the idea of a submarine line across the Atlantic Ocean began to be thought of as a possible triumph of the future. Samuel Morse proclaimed his faith in it as early as the year 1840, and in 1842 he submerged a wire, insulated with tarred hemp and India rubber,[2][3] in the water of New York harbour, and telegraphed through it. The following autumn Wheatstone performed a similar experiment in Swansea bay. A good insulator to cover the wire and prevent the electric current from leaking into the water was necessary for the success of a long submarine line. India rubber had been tried by Moritz von Jacobi, the Prussian electrical engineer, as far back as the early 1800s.

[Image: Submarine cables are laid using special cable-laying ships.]

Another insulating gum which could be melted by heat and readily applied to wire made its appearance in 1842. Gutta-percha, the adhesive juice of the Palaquium gutta tree, was introduced to Europe by William Montgomerie, a Scottish surgeon in the service of the British East India Company. Twenty years earlier he had seen whips made of it in Singapore, and he believed that it would be useful in the fabrication of surgical apparatuses. Michael Faraday and Wheatstone soon discovered the merits of gutta-percha as an insulator, and in 1845 the latter suggested that it should be employed to cover the wire which was proposed to be laid from Dover to Calais. It was tried on a wire laid across the Rhine between Deutz and Cologne. In 1849 C.V. Walker, electrician to the South Eastern Railway, submerged a wire coated with it, or, as it is technically called, a gutta-percha core, along the coast off Dover.

The first commercial cables
In August 1850, John Watkins Brett's Anglo-French Telegraph Company laid the first line across the English Channel. It was simply a copper wire coated with gutta-percha, without any other protection. The experiment served to keep alive the concession, and the next year, on November 13, 1851, a protected core, or true cable, was laid from a government hulk, the Blazer, which was towed across the Channel. The next year, Great Britain and Ireland were linked together. In 1852, a cable laid by the Submarine Telegraph Company linked London to Paris for the first time. In May, 1853, England was joined to the Netherlands by a cable across the North Sea, from Orford Ness to The Hague. It was laid by the Monarch, a paddle steamer which had been fitted for the work.

Transatlantic telegraph cable
The first attempt at laying a transatlantic telegraph cable was promoted by Cyrus West Field, who persuaded British industrialists to fund and lay one in 1858. However, the technology of the day was not capable of supporting the project; it was plagued with problems from the outset and was in operation for only a month. Subsequent attempts in 1865 and 1866 with the world's largest steamship, the SS Great Eastern, used more advanced technology and produced the first successful transatlantic cable. The Great Eastern later went on to lay the first cable reaching India from Aden, Yemen, in 1870.

Submarine cable across the Pacific
This was completed in 1902–03, linking the US mainland to Hawaii in 1902 and Guam to the Philippines in 1903. Canada, Australia, New Zealand and Fiji were also linked in 1902.

The North Pacific Cable system was the first regenerative (repeatered) system to completely cross the Pacific from the US mainland to Japan. The US portion of NPC was manufactured in Portland, Oregon, from 1989–1991 at STC Submarine Systems, and later Alcatel Submarine Networks. The system was laid by Cable & Wireless Marine on the CS Cable Venture in 1991.

Transatlantic cables of the 19th century consisted of an outer layer of iron and later steel wire, wrapping India rubber, wrapping gutta-percha, which surrounded a multi-stranded copper wire at the core. The portions closest to each shore landing had additional protective armor wires. Gutta-percha, a natural polymer similar to rubber, had nearly ideal properties for insulating submarine cables, with the exception of a rather high dielectric constant which made cable capacitance high. Gutta-percha was not replaced as a cable insulation until polyethylene was introduced in the 1930s. In the 1920s, the American military experimented with rubber-insulated cables as an alternative to gutta-percha, since American interests controlled significant supplies of rubber but no gutta-percha manufacturers.

Bandwidth problems
Early long-distance submarine telegraph cables exhibited formidable electrical problems. Unlike modern cables, the technology of the 19th century did not allow for in-line repeater amplifiers in the cable. Large voltages were used to attempt to overcome the electrical resistance of their tremendous length but the cables' distributed capacitance and inductance combined to distort the telegraph pulses in the line, severely limiting the data rate for telegraph operation. Thus, the cables had very limited bandwidth.

As early as 1823,[citation needed] Francis Ronalds had observed that electric signals were retarded in passing through an insulated wire or core laid underground, and the same effect was noticed by Latimer Clark (1853) on cores immersed in water, and particularly on the lengthy cable between England and The Hague. Michael Faraday showed that the effect was caused by capacitance between the wire and the earth (or water) surrounding it. Faraday had noted that when a wire is charged from a battery (for example when pressing a telegraph key), the electric charge in the wire induces an opposite charge in the water as it travels along. As the two charges attract each other, the exciting charge is retarded. The core acts as a capacitor distributed along the length of the cable which, coupled with the resistance and inductance of the cable limits the speed at which a signal travels through the conductor of the cable.
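The distributed-RC picture Faraday described can be quantified with what became known as Thomson's "KR law": the cable's dominant time constant scales as total resistance times total capacitance, so it grows with the square of the length. A sketch with illustrative per-kilometre values (assumed for demonstration, not measured figures from the 1850s cables):

```python
# Thomson's "KR law": the dominant time constant of an unloaded cable scales
# with total resistance x total capacitance, i.e. with length squared.
# Per-km figures below are illustrative assumptions, not historical values.
r_per_km = 3.0       # series resistance, ohms per km (assumed)
c_per_km = 0.15e-6   # shunt capacitance, farads per km (assumed)

def time_constant(length_km):
    return (r_per_km * length_km) * (c_per_km * length_km)

# Doubling the length quadruples the delay.
ratio = time_constant(4000) / time_constant(2000)
print(ratio)  # 4.0
```

This quadratic growth is why a cable that worked well across the Channel could be nearly unusable across the Atlantic.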

Early cable designs failed to analyze these effects correctly. Famously, E.O.W. Whitehouse had dismissed the problems and insisted that a transatlantic cable was feasible. When he subsequently became electrician of the Atlantic Telegraph Company he became involved in a public dispute with William Thomson. Whitehouse believed that, with enough voltage, any cable could be driven. Because of the excessive voltages recommended by Whitehouse, Cyrus West Field's first transatlantic cable never worked reliably, and eventually short circuited to the ocean when Whitehouse increased the voltage beyond the cable design limit.

Thomson designed a complex electric-field generator that minimized current by resonating the cable, and a sensitive light-beam mirror galvanometer for detecting the faint telegraph signals. Thomson became wealthy on the royalties of these, and several related inventions. Thomson was elevated to Lord Kelvin for his contributions in this area, chiefly an accurate mathematical model of the cable, which permitted design of the equipment for accurate telegraphy. The effects of atmospheric electricity and the geomagnetic field on submarine cables also motivated many of the early polar expeditions.

Thomson had produced a mathematical analysis of propagation of electrical signals into telegraph cables based on their capacitance and resistance, but since long submarine cables operated at slow rates, he did not include the effects of inductance. By the 1890s, Oliver Heaviside had produced the modern general form of the telegrapher's equations which included the effects of inductance and which were essential to extending the theory of transmission lines to higher frequencies required for high-speed data and voice.
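Heaviside's result can be stated compactly. With R, L, G and C the series resistance, series inductance, shunt conductance and shunt capacitance per unit length, the line voltage v(x, t) and current i(x, t) satisfy the telegrapher's equations:

```latex
\frac{\partial v}{\partial x} = -R\,i - L\,\frac{\partial i}{\partial t},
\qquad
\frac{\partial i}{\partial x} = -G\,v - C\,\frac{\partial v}{\partial t}
```

Thomson's earlier analysis is the special case L = G = 0, which reduces the pair to a diffusion equation: pulses on a long unloaded cable smear out rather than propagate as waves, which is exactly the distortion the early transatlantic operators fought.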

Source: Wikipedia

Toshiba announced today that it has developed the all-new SDXC card, capable of providing up to 64GB of memory.
According to the Japanese manufacturer, the first samples will be available in November, while the official launch is scheduled for spring 2010. The new memory card, featuring 64GB of storage space with the exFAT file system, will provide a write speed of 35MB/s and a read speed of 60MB/s.


Besides this SDXC card, Toshiba also unveiled new FAT32 32GB and 16GB SDHC cards, both providing high-speed transfer rates (a write speed of 35MB/s and a read speed of 60MB/s). These two cards will also be available in spring 2010.

TOKYO and IRVINE, Calif., Aug. 3 -- Toshiba Corporation, a leading innovator in NAND flash memory technologies and solutions, and Toshiba America Electronic Components, Inc. (TAEC), a North American subsidiary, today announced the launch of the world's first 64GB(1) SDXC Memory Card(2) capable of operating at the world's fastest data transfer rate(3) for reading and writing to a flash memory card. The new card is compliant with the new SD Memory Standard, Ver. 3.00, UHS104. Toshiba also extended its industry leadership in memory card solutions by unveiling 32GB and 16GB SDHC Memory Cards compliant with the world's fastest data transfer rate. Samples of the new SDXC Memory Cards will be available this November, and samples of the new SDHC Memory Cards will be available in December.

The new SDXC and SDHC Memory Cards are the world's first memory cards compliant with the SD Memory Card Standard Version 3.00, UHS104, which brings a new level of ultra-fast read and write speeds to NAND flash based memory cards: a maximum write speed of 35MB(4) per second, and a read speed of 60MB per second. The combination of large storage capacities and increased data transfer rates will meet the needs of a wide range of consumer electronics applications such as digital still cameras and digital camcorders that require high bandwidth data communication. For example, digital SLR cameras will be able to shoot longer continuous bursts in the highest quality RAW format. Similarly, with these cards, it will be possible to download a 2.4GB video in only 70 seconds.
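The quoted 70-second figure for a 2.4GB video follows directly from the card's 35MB/s maximum write speed, which is the bottleneck when downloading to the card:

```python
# Sanity check on the quoted figure: writing a 2.4GB file at the card's
# 35MB/s maximum write speed.
file_mb = 2.4 * 1024   # 2.4GB in MB
write_speed = 35       # MB/s

seconds = file_mb / write_speed
print(round(seconds))  # 70 seconds, matching the press release
```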

The SDXC card is the next-generation standard defined by the SD Association in January 2009. The new standard applies to cards with capacities over 32GB and up to 2TB, compared to the SDHC standard, which applies to cards with capacities over 2GB and up to 32GB.
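The capacity ranges in this paragraph partition cards cleanly by standard; a small sketch of that classification (the function name and GB granularity are illustrative, not part of the SD specification):

```python
# Capacity ranges per the SD Association standards described above:
# SD up to 2GB, SDHC over 2GB up to 32GB, SDXC over 32GB up to 2TB.
def sd_standard(capacity_gb):
    if capacity_gb <= 2:
        return "SD"
    if capacity_gb <= 32:
        return "SDHC"
    if capacity_gb <= 2048:
        return "SDXC"
    raise ValueError("beyond the SDXC 2TB limit")

print(sd_standard(64))  # SDXC
```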

The high level specifications and wide range of memory cards announced by Toshiba will further open the way for developers to bring applications to future generations of consumer products. By further enhancing its SD Memory Card lineups with larger capacity and a higher data transfer rate, Toshiba will continue to meet market demand and to lead the NAND flash memory market.

(1) SDXC Memory Card realizes the 64GB capacity, the largest capacity yet available in the market.

(2) Supports UHS104, a new ultra high speed interface in the new SD Memory Card Standard Ver. 3.00, which provides 104MB per second bus speed on the SD interface, and realizes maximum write speed of 35MB per sec., with a read speed of 60MB per sec.

(3) UHS104 also supports the conventional SD interfaces (3.3V DS at 25MHz and HS at 50MHz), and new SDHC Memory Cards with UHS104 are interoperable with existing SDHC host devices.

(4) Integrates highly secure CPRM copy protection technology.

Projectors are now considered one of the best media for sharing information through presentations. To cater to different needs, various types of projectors with advanced features are available on the market:

Desktop Projectors: These projectors are compact and portable. As the name indicates, they fit on a desktop. A good-quality desktop projector offers changeable lenses, zoom capabilities, stereo sound and high-definition pictures.

Home theatre projectors: These come with digital units that can be connected to all the audiovisual equipment at home. They are portable and feature S-video, composite and component connectors, and a built-in speaker.

Overhead Projectors: These remain among the most popular and effective ways to display projected images. Portable models are widely used in schools and businesses.

Installation Projectors: These projectors are used to show images in large venues such as conference halls, museums and auditoriums, and can project an image up to 25 feet across. A good installation projector features zoom capabilities, high-resolution reproduction, interchangeable lenses, tilt adjustment, stereo speakers, a quiet cooling fan, an onscreen display, a remote control and a mouse.

Slide projectors: Popular for decades and still in use, these devices project photographic slides. The latest versions come with a pop-up screen for instant portable viewing.

Opaque Projectors: The oldest type, originally designed as an artist's enlarging device. These projectors let artists trace projected images and transfer them onto canvas.

Source: http://www.saveonprojectors.com

Until now, most consumer PCs have run on software from one of two companies: Microsoft or Apple.
But on Wednesday, search giant Google (GOOG) shook up the computing world by formally announcing plans to compete head-to-head against those companies on their home turf: PC operating systems.

Google gave notice that it's developing its own PC operating system initially targeted at netbooks, those pint-size, inexpensive PCs currently selling like hot cakes. Google is meeting with hardware manufacturers and hopes to have it on computers by the second half of 2010.

Google's goal is to be the opposite of today's operating systems — especially Windows, which commands 90% of the market. The ubiquitous software has a reputation as virus-prone and complicated. Google says its Google Chrome Operating System will be faster, smoother and lightweight.

An outgrowth of Google's Chrome Internet browser, the OS is designed "to start up and get you onto the Web in a few seconds," Google said in its official blog post announcing the product. Google says it can achieve that by building a system from the ground up, one that isn't constrained by working with a legacy system initially built in the 1980s.

Now, all it must do is execute.

Unlike Windows, Chrome is an open-source project like the Linux operating system that's popular with techies, which means outside software developers are welcome to work on it. And Google believes developers who have a stake in the project will find a way to bring Chrome to a wide variety of PCs quickly, says a person with direct knowledge of Google's intentions, who isn't authorized to speak on behalf of the company.

Love it or hate it, Microsoft (MSFT) sells some 400 million copies of Windows annually. PC manufacturers —Dell (DELL), Hewlett-Packard (HPQ), Lenovo, Acer, Toshiba and more — offer Windows on most PCs. When Microsoft comes out with a new operating system — as it will in the fall with Windows 7 — that's what most consumers get when they purchase a new PC. Microsoft declined to comment on Google's announcement.

But Google's operating system will be free, compared with the average $45 per machine manufacturers pay for Windows.

"Microsoft has a real problem," says Charles Wolf, an analyst at Needham & Co. "HP can now say to Microsoft, 'We've got a great operating system (Google) that doesn't cost us anything — what are you going to do about it?' " Linux, too, is sometimes free, but it can be hard to use.

Still, for consumers, "The learning process of any change is so substantial, most people will resist it, unless Google can really show a compelling reason," says Phil Leigh, an analyst for Inside Digital Media. "Most will stick with what they know."

The battle is on

Google has been locked in a battle with Microsoft for years.
Microsoft urges consumers to use its MSN.com as a home page on the Web, to make its new Bing (formerly Live) their search engine of choice, and to use its Internet Explorer browser, effectively bypassing Google.

Google — the most visited website worldwide — countered last year with Chrome, its own browser. It says some 30 million people are using it now.

Don't like Microsoft's Office software? Google offers online word processing, spreadsheet and presentation programs that are free.

Microsoft, which has been trying to catch up to Google's dominance of search advertising (5.5% vs. 72% market share in April, according to Hitwise), recently launched Bing, a well-received search overhaul that's been advertised heavily.

In reaching for Microsoft's cash cow, the operating system, analysts say, Google is in for a tough haul.

"Google will find that it's much harder than it looks," says Roger Kay, president of Endpoint Technologies Associates. "There're all those drivers and devices that have to be supported."

Microsoft has huge customer service departments. As anyone who's ever tried to contact Google knows, there are no customer service reps to call on the phone.

Microsoft is unaccustomed to having operating system competitors, but Kay says "it will do whatever it can to fight back."

Chrome isn't Google's first operating system. With more consumers conducting searches on mobile devices, Google launched Android, an operating system for phones.

The clash of tech titans has rekindled questions about whether either has what it takes to diversify beyond their respective core businesses. Microsoft, for instance, continues to derive some 80% of its revenue from selling the Windows operating system and Office suite; this despite pouring billions into search advertising, online services, video games and other businesses.

Similarly, Google gets 97% of its revenue from online advertising, despite multiple attempts to diversify with Google Apps, instant messaging, photo-editing software and Android.

Android could get to a netbook before Chrome does: Upstart PC maker Acer announced in June that it would begin selling Google netbooks in October based on Android. Acer declined to comment for this story.

Taking it online

Trip Chowdhry, an analyst at Global Equities Research, says Google will begin getting netbook customers by targeting the 60 million users of its Gmail e-mail service. "The influencing power will be on the company that can provide a branded and exceptional online experience."

Microsoft sells 400 million copies of Windows yearly. "Can that 400 million become 800 million?" asks Chowdhry. "Not likely. Can Google's 60 million grow to 1 billion? Yes."

Matt Rosoff, an analyst at research firm Directions on Microsoft, says Google will need to counter Microsoft's strong marketing and consumer support with efforts of its own.

"It will need to devote serious marketing resources to explain to average consumers, not just tech enthusiasts, why they'd want this new OS," he says.

Analysts see Apple (AAPL) getting hurt by Google's challenge, as well.
A Google netbook at $300 would be $700 less than Apple's current entry-level laptop, the $999 MacBook.

"The growth in the market is coming from netbooks, and Apple's been missing that," says Gene Munster, an analyst at Piper Jaffray. "We believe Apple will respond with a netbook in the first quarter of next year — but it will be more expensive than Google's."

Meanwhile, Microsoft is going to be anything but quiet this year. It can fall back on deep relationships with software developers and retailers. And it will almost certainly tweak pricing and features of Windows 7 to compete, Rosoff says.