©Consumer Electronics Association 2009




Welcome to the latest edition of Five Technology

Trends to Watch. This annual Consumer Electronics

Association (CEA) publication looks at the new

technologies that will shape our future. I remain optimistic despite

the challenges to the economy. The consumer technology industry

continues to show promise with sales expected to reach $172

billion for 2009.

This year we look at the evolution of content, connected devices

in the home, TV beyond HD, connected cars and the smart grid.

The publication also takes a peek at the future of CE. For example,

IBM is working to develop artificial DNA nanostructures as a

framework to build the tiny microchips used in electronics devices.

Although still many years out, this work could one day impact how

we build, operate and interact with electronics.

Learn also about advances in a holographic storage material

capable of storing 500GB of data on a DVD-sized optical disc – ten

times the amount that can be stored on a dual-layer Blu-ray disc.

It’s not here yet but discs of this size could one day store 3D video.

I hope that Five Technology Trends to Watch increases your interest

to learn more about the technologies that continue to improve

our lives. What better place to learn than at the 2010 International CES in Las Vegas, Nev., from January 7-10? Come to CES to see

the latest products and services from more than 2,000 exhibitors

including 300 who are brand new to the show.

CES includes more than 200 conference sessions that feature

600 technology experts. The focus is on product categories like

digital entertainment, wireless, digital imaging, computing and

networking, audio, video, in-vehicle technology and accessories for

the home, office and road.

To see award-winning technology, check out the Innovations Design and Engineering Awards showcase, which honors achievement in product and engineering design, and visit the winner of the i-stage competition held at CEA’s Industry Forum, who will have a booth at CES.

Come to CES and see how innovation is advancing technology. For

more information, visit www.CESweb.org. See you at the show!

Gary Shapiro

President and CEO



The 2009 protests in Iran became a leading news story

across the globe earlier this summer. While the outcome of

this uprising continues to unfold, there is a simultaneous

development now inextricably linked with the event. As conditions

within Iran deteriorated in the wake of the disputed election, the

flow of reliable information was hampered once the government

effectively shut down the Internet (disabling access to the servers).

At the same time, most “major” news networks were unable, for a

variety of logistical reasons, to penetrate the officially constructed

wall of secrecy.

Enter Twitter. Protesters inside Iran used Twitter to update and

organize, while people outside the country shared and promoted

these messages, spreading awareness in an unprecedented fashion.

Then, in the midst of this historic information exchange, the

service announced its intention to temporarily suspend service in

order to complete some systemic upgrades.

Enter Obama. As the White House subsequently confirmed,

the administration formally requested that Twitter postpone its

upgrade to enable protesters uninterrupted access. “It appears

Twitter is playing an important role at a crucial time in Iran,”

noted White House spokesman Robert Gibbs. This progression of

events, needless to say, has ramifications far beyond Iran and the

political interstices of the world arena – beyond even the World

Wide Web. Notice was served that the dissemination (and control)

of information will be increasingly difficult, if not impossible,

to suppress or merely regulate. The so-called “mainstream”

news outlets’ inability to keep up with the story has exposed the

limitations of traditional reporting and spotlights the transition to

an audacious new world. If the Internet succeeded in making the

concept of news a 24/7 proposition, Twitter has created a 10,080-minute-per-week reality: there is now a virtually ceaseless influx of

updates available for anyone with broadband access.


As is often the case, we can predict the future by considering

the past. In regards to the evolution of content (itself a loaded

and difficult concept to define), it is instructive to consider the

radical reconfiguration of the music industry. The ways in which

consumers are purchasing, sharing and even listening to music

may serve as a case study of sorts. While it is unlikely that other

forms of content will endure the same sort of initial growing pains,

they also cannot be expected – for a variety of reasons – to witness

the swiftness with which the market transformed.

Everyone remembers the way this played out: less than ten years ago the compact disc was still king. Cars that had for decades come equipped with cassette players (in fact, into the late ‘80s the installation of a tape player was often a moderately expensive aftermarket enhancement to the standard sound system) began upgrading to CD players and changers, and cassettes ultimately

went the way of the 8-track. In stores and online, compact discs

were the gold standard, and while the price point dipped every

couple of years, it still cost a considerable chunk of change to

procure the same content formerly stored in plastic cases for less

than $10.

The Internet, obviously, changed everything. As we entered a

new millennium, suddenly sites (invariably unauthorized ones)

surfaced, enabling consumers to download music files for free.

This phenomenon quickly captured the attention of record labels

and even artists (remember the Metallica vs. Napster imbroglio),

but most everyone ultimately came to understand that the digital

genie could never be placed back into the bottle. Quality and

consistency of sound were recurrent issues, but people accustomed

to shelling out $15 for a single compact disc proved more than

willing to sacrifice some fidelity in order to embrace the notion of

free music.

Enter Apple. As passé as it sounds today, that digital content

miracle also known as the iPod initially seemed like a space-age,

outrageously expensive upgrade to the Walkman. There are myriad

reasons for the iPod’s astonishing success, but a key component of

its appeal was the alternatives it offered mainstream (or, old

school) consumers. For instance, despite the initial expense of

the hardware, the concept of purchasing a single song for $0.99

was as liberating as it was progressive. The older generation had

resignedly accepted the notion of purchasing an entire compact

disc even if there were only one or two desirable songs. Suddenly,

the consumer had the power to pick and choose, and the industry’s

previously impregnable monopoly was permanently shattered.

Enter MySpace (and Facebook). Social networking sites have

greatly accelerated the democratization of content, and now

musicians are able to promote (and sell) their material directly

to their audience. Ten years ago, people went online to discuss

music; now they can buy music, listen to (free) music and, of

course, discuss music. Jason Herskowitz, vice president of product

management at LimeWire (the world’s most popular peer-to-peer file-sharing program), has been directly involved in the

machinations of the music industry for almost twenty years and



even he admits being surprised by the success not only of social

networking sites in general, but specifically the ways they facilitate

curiosity and demand. “If you told someone five years ago that

Facebook would be the fourth most visited site on the Web, they

probably would have said, ‘Who?’”

Of course the music industry, by taking so long to see the writing

on the wall, squandered valuable time to adapt and innovate.

In the meantime, content sites have moved in and now serve as

destinations where fans can gain exposure to more (and often

free) music, and musicians gain exposure to a diverse fan base.

“These sites definitely benefit from music content, and vice versa,”

Herskowitz explains. “MySpace initially featured a large number

of unsigned or unknown bands, and it provided an ideal forum

for artists looking for a tool to promote their work. Today, the

site includes major label content and is spawning a separate

company altogether, called MySpace Music – an endeavor that is a

collaboration between MySpace and the big record labels.” From

Napster to MySpace, everything about music – from creation to

marketing to distribution – has come almost full circle, albeit in

a way that fully embraces the technological advances digitized

content enabled.

Atlantic Records recently announced that more than fifty percent

of its total U.S. music sales come from digital products (e.g., music

downloads and cell phone ring tones). The predicted decline of

the CD is happening faster than most people could have imagined,

and the prospects for a hastened demise are increasingly obvious.

More worrisome is the overall market, which continues to shrink

year-over-year. According to the Recording Industry Association

of America (RIAA), total U.S. music sales in 1999 generated

more than $14 billion, and expectations are that revenues will be

under $10 billion in less than five years. Put simply, even as digital

content replaces the old paradigm, the younger demographic

weaned on the expectation of free music appears reluctant to pay

for something they can easily (if illegally) obtain at no cost.

Indeed, the very prospect of opening one’s wallet for content is,

in some regards, an antiquated concept. This puts the industry

and, to a lesser extent, artists in something of a bind. Or, it presents

an opportunity, born of necessity, to not only think outside the

proverbial box, but to blow it up.

Enter Pandora. The Internet music service founded in 2000 has

become one of the more widely known, free “music discovery”

sites. Pandora empowers users to create their own “stations”

based on songs by favorite artists, or have similar songs/

musicians recommended. The enticement for the young (and

older) demographic is obvious: easy exposure to music, and the

opportunity to explore new artists with a minimal investment of

time or effort. Various subscription levels are available that provide

virtually unlimited access to the site’s services (the no-cost option

has a 40-hour per month limit, for instance).

Founder and Chief Strategy Officer Tim Westergren believes that

the trend of more musicians making music available for free does

not necessarily threaten the viability of potential subscription

services. “In theory, yes – if you could get it for free, why would

you pay for it? (On the other hand) it seems fundamentally true that… the motivation to subscribe has a lot more to do with well-known music, popular music. That stuff is not going to be free

anytime soon.” This naturally underscores the distinction between

lesser-known bands eagerly making their music available and the

more popular music, which is likely to retain its premium price

point for the foreseeable future.

Westergren also embraces the undeniable success of social

networking sites, indicating that they can be utilized strategically.

“Social networking is fundamentally good because it’s getting

people involved,” he says. “We have benefitted greatly from social

networking in terms of people spreading the word about Pandora.

We have a Facebook and MySpace account, we have a Twitter

account and that’s where a lot of people learn about things like




The specific circumstances are not identical, but the fate of

newspapers is following a pattern the music industry was forced to

confront. Increasingly, there are panicked cries declaring the death

of print media, as more newspapers and magazines cease to exist.

Not all that long ago blogs were dismissed (often by the very

folks now finding themselves caught up in cutbacks at shrinking

newspapers), but they have developed into a viable – and profitable

– alternative to traditional media. The same principle applies to

readers of newspapers and magazines: if content can be found

online for free, who is going to pay for it? (The reason this content

is even available online is because once the balance of power in

terms of readership transferred, the mainstream outlets followed

the advertising dollars.) Today, writers with popular blogs are

making the type of money from advertisers that newspapers and

magazines once took for granted. Advertising sales are now being

aggressively generated at targeted sites like Craigslist and the

aforementioned social networking sites.

Newspapers continue to fold, and those able to survive are

figuring out that some readership is preferable to none. As such,

online advertising will increasingly constitute the lifeblood of

these publications. Most newspapers have made their complete

daily content available, at no cost, online. Today the debate rages

about whether the same audience who has grown accustomed to

free music will ever pay to read the news (if they read the news

from a mainstream outlet in the first place). Despite the recent

proclamation by Rupert Murdoch that he intends to charge for

content, he – and others reluctant to reconfigure their business

models – will likely relive the pain the music industry suffered.


Chris Anderson, writer and editor-in-chief of Wired magazine,

caused a stir with the publication of The Long Tail, a book that

examined the more successful strategies employed by online

giants such as Amazon.com and Netflix. His recent follow-up,

provocatively entitled Free, proposes that businesses actually can

(and do) generate more profit by giving away content. Naturally,

this is a complicated notion and Anderson’s thesis has attracted

admirers as well as detractors (the most notable being Malcolm

Gladwell, author of the bestseller The Tipping Point and no

stranger to fanfare and controversy himself). In Free, Anderson

proposes that “in the digital realm you can try to keep free at bay with

laws and locks, but eventually the force of economic gravity will

win.” This assertion underlies the premise that musicians benefit

by giving away their music for free, or that the unpaid writers who blog

for The Huffington Post gain more from the considerable exposure

the site affords them. Suffice it to say, there are indisputable

elements of truth in either scenario: the readership a writer can reach via a popular website dwarfs the audience they might attract to their own blog. On the other hand, the somewhat hysterical

warnings about the “death of journalism” become a little more

prescient when one considers the unwillingness (or, projecting out several years, the inability) of publications to pay writers for

their services.

The question then arises: is free content not only the preferable

business strategy, but the inevitable one? As recent history has

revealed, copyright laws can only keep the pirates at the gate for so

long, and the concept of simple sharing seems practically passé at

this point. The strategies Anderson discusses illustrate a reaction

as much as an action on the parts of the content providers: the

writing is already on the wall, and if the relative economic surplus

of recent years has gone away for good, the stakes are painfully

high for companies trying to remain profitable. The lesson being

learned might be described as a revision of the old cliché: If you

can’t beat them, have them join you. Of course this concept of

free-for-all entertainment addresses a specific, not indefinitely

sustainable period in time. If and when revenue from advertising

dries up, it’s no longer a matter of free content, but the availability of any content.

Netflix is one company actively seeking – and quite possibly

finding – a solution that balances increased exposure and

profitability. For example, subscribers to their service can now

access more than 12,000 titles, which are streamed directly to the

user’s PC, at no extra charge. Subscribers can also pay a nominal

one-time cost to get the same 12,000 titles instantly available

on their TVs via its Roku digital video player, or the popular

Xbox 360. One wonders if the current fee of $99 will eventually

disappear as Netflix seeks to solicit more customers. Certainly, with

the economic woes having a deleterious effect on the marketplace,

pressure has never been greater for content providers to attract

customers literally by any means necessary.

Initially, sites like MySpace and YouTube were the primary destinations for online video content, mostly via brief video clips uploaded by other users.

Now, sites like Hulu (which utilizes advertising revenue to avoid

charging for its content) are attracting viewers to watch full-length

features. This trend is not likely to threaten TV sales or take away

from home viewing, but it speaks to the new and cost-free options

consumers can take advantage of.



Most of the content discussed to this point has always been in

some state of transition. Movies, for instance, were silent, then

shown on public screens, then available on private screens (TVs),

and now they can be viewed on PCs and smartphones. Music went

from vinyl to reel-to-reel to digital, with the hardware constantly

becoming smaller to the point where a device holding thousands

of songs can now fit snugly in your front pocket. Games have

followed a similar course: from cardboard table-sized offerings to

free wireless programs that can be played simultaneously by people

in different area codes. Books, on the other hand, have remained

virtually unchanged since their inception. They have been refined

to accommodate ease-of-use and their means of production

have advanced considerably, but a book has remained a bound

and printed product, read the same way as it was six centuries

ago. Thus, the rather recent development of electronic media

(specifically e-books and their associated hardware, or e-readers)

signals another paradigm shift.


It is an intriguing commentary on the plethora of options presently available that we are seeing the enormous success of flat-panel displays, yet we are also hearing about the increasing popularity of Internet-based content.
Indeed, where the transformation of music and movies has been

underway for some time, the big battle of the immediate future is

the ostensible decline of book sales once e-books gain mainstream

traction. One important distinction is that the book publishing

industry, beneficiaries of the hard lessons learned by the music

business, has already embraced these inevitable developments


and continues to plan accordingly. The prediction, therefore, that

books will follow CDs to the tar pit is unlikely to reach fruition

anytime soon: paperbacks, especially used copies, are significantly

less expensive than CDs ever were. As such, physical texts are

affordable and entrenched, and will not become extinct in our

lifetimes. Books, as their history illustrates, represent arguably

the most adaptable and user-friendly form of entertainment ever

created. Put simply, people love books and savor the experience of

reading text; the somewhat recent portability of music is a solution

books never needed to address. On the other hand, the initial

success of e-books owes more to a refinement of the experience than to an obliteration of it.

Jeff Kleinman, co-founder of Folio Literary Management, has been

intimately involved in the publishing industry for two decades

and welcomes the advances e-books are making possible for both

writers and readers. When asked if he can envision traditional

books disappearing in this generation or the next, he looks to the

past to anticipate the future. “(Printed) books going away seems

doubtful to me, but they seem to be going the way that illustrated

manuscripts went, when the printing press came out,” he observes.

“All of a sudden there was a much cheaper, more popular way of

having access to a book’s content. We’re not even near where things

will be moving in the future. Most insiders are realizing that the

e-book industry is pretty much the only sector of the marketplace

that is growing.”

CEA expects sales of e-books to accelerate considerably over the

next five years. Total unit shipments should reach almost 1.2

million by the end of 2009, representing a more than one hundred

percent increase versus last year. Double-digit unit shipment

growth is expected through 2013. The picture for total expected

revenue is correspondingly bright: e-books look to generate $317

million by the end of 2009 (up over one hundred percent from

2008) and double-digit growth is anticipated over the next five years, culminating in over $2.1 billion in 2013.

Another trend to keep a close eye on is how e-books will impact

academia. While it’s improbable to envision paperback books

disappearing, it’s much easier to see how bulky and costly

textbooks might slowly phase out in favor of PC (or eventually,

e-reader) accessible content. McGraw-Hill Education has already

announced that they are making a number of college textbooks

available for use via Kindle. CourseSmart LLC, which has a catalog

of more than 7,000 e-textbooks, is now available for the iPhone

and iPod Touch.

The younger demographic raised on PCs and portable devices

is increasingly likely to engage with the dynamic capabilities of

e-text, and may find textbooks old-fashioned. In addition to the

green-friendly aspects of this conversion, there is a potentially

substantial cost-savings initiative involved. In a recent feature for

The New York Times, Gov. Arnold Schwarzenegger is quoted as

predicting that providing “free” e-textbooks could mean savings

in the millions on an annual basis. In the same article, William

M. Habermehl, the Orange County superintendent, predicted

that within five years the majority of students will be using digital textbooks.


A big story in 2008 was Amazon’s Kindle, which appeared to be

on the verge of becoming the de-facto brand for e-books, just as

iPods are now pretty well synonymous with MP3 players. The

competition, watching (and studying) Amazon’s success, has begun

to serve notice that Kindle will not automatically own the market.

Sony’s e-book solution, the Sony Reader, has a competitive price

point and has emerged as a viable alternative. Other models in

various stages of development, such as Samsung’s Papyrus, and

offerings from iRex Technologies, Astak and Acer, will make a

concerted play in this space during the months and years ahead.



Perhaps the largest threat to Amazon and Sony is going to come

from Barnes & Noble. In July, the company launched an online

bookstore that includes a new application for mobile devices and

PCs. The cost to download new releases was initially set at $9.99

– the same amount Amazon charges. Barnes & Noble has recently

announced a new partnership with Plastic Logic, which is creating

an e-reader set to debut in early 2010. This news has industry

analysts buzzing with anticipation, and some are expecting

Plastic Logic’s larger (10.7-inch) screen to entice consumers

accustomed to bigger viewing areas for media such as magazines

and newspapers. Sony, for its part, continues to work with Google,

which has over one million books already digitized. Right now the

Sony menu, with over 500,000 titles, is more than double Amazon’s

available catalog – though disputes rage concerning the quality

of the text and even the accuracy with which the numbers are

accounted. The competition is certain to heat up as each company

strives to position itself as the preferred brand for e-readers.

In much the same way musicians have found ways to benefit from

streaming their music (for free and/or via iTunes or a service),

authors have a considerable stake in the e-books sweepstakes.

For obvious reasons, the prospect of cheaper and easier access

to content is a strategy writers can endorse, and it represents

another step toward democratized dissemination of material, while

operating within the profit-driven imperatives of the free market.

“People are always hungry for content,” Kleinman explains. “And

the print newspaper or print book increasingly seems like an

antiquated means of delivering that content. Interestingly, Amazon

has repeatedly stated that the older demographic has been utilizing

Kindle specifically because of its ability to expand font size. Books are not dead, but the definition of the ‘book’ is changing.”


Our contemporary market illustrates a continuum of where

content has been and where it is going. The industry’s resistance to

change is now history: adaptation is the new business acumen, and

those who hope to succeed will embrace these inexorable trends. It

is indeed a positive signal of the times that a robust and forward-looking industry is now driving these changes, and remains

dedicated to providing innovative ways to deliver the content

consumers crave. ■


CEA recently surveyed consumer attitudes toward content,

incorporating their preferences and predictions. The results

were unsurprisingly indicative of the myriad choices, habits and

possibilities associated with the accessibility of content. As one

might expect, the younger demographic has been quicker to

embrace new media. Of the respondents aged 18-24, 67 percent

have shared photos through social networking sites, and 21 percent

have read an electronic book; by comparison, the percentages

for those aged 55 and older are 29 percent and five percent, respectively.


Looking at the past 12 months, almost half the respondents have

shared photos through a social networking site and 40 percent

read an electronic copy of the newspaper. The numbers decline

with newer technologies, such as reading e-books or accessing the

Internet via cell phones (13 percent and two percent, respectively).

According to the same study, a larger percentage of females prefer

to read printed copies of books and newspapers. Income appears

to have a significant correlation with those adapting quickly to new

technology: respondents with an income above $75k are more than

twice as likely as those making $25k or less to read news online.

When asked to consider how they anticipate using new media two

years from now, the older demographic appears poised to adapt in

greater numbers than today, while the younger demographic will

only expand its present enthusiasm. For instance, 56 percent of

respondents aged 25-34 indicated that they are somewhat or much

more likely to view movies online, as opposed to 17 percent of

those aged 55 and older.



Perhaps the most succinct way to summarize the evolution of

content is by acknowledging the positive implications for the CE

industry. Consider the iPod, the Kindle or the smartphone: these are not merely applications but actual products. The fact

that these advancements incorporate hardware has enabled

certain segments within the industry to flourish, and the ongoing

imitation of contemporary success stories constitutes viable

opportunity during challenging economic times. With new

announcements coming almost every day, it’s difficult to predict

what might be just around the corner a week, or three months

from now.

In sum, income and age will remain the most significant factors dictating the adoption of new content. While the younger and higher-income demographics indicate a greater likelihood of utilizing all available technologies, evolving content has the inherent advantage of being both cheaper and more accessible.


Imagine you have gone on vacation with your family to a

far-away tropical paradise. Like many travelers, you arrange

to secure your house while you are gone. Mail and newspaper

deliveries are put on hold, lights are set to timers and the security

system is activated. Having arrived at your destination and sitting

in your hotel, you have a sinking feeling that you have left a

window in your first floor rec room unlocked. What do you do?

Some of us might call a neighbor while others would just hope

that the window was locked. What if you could check your home

security system on your laptop or smartphone and secure it This

is just one example of the advantages the connected home has to


Sensors can be programmed to raise or lower window shades to further control home temperature. Smart energy monitors add another layer of

control, tracking total home energy usage and manipulating the

power used by appliances (such as air conditioning and hot water

heaters) so they are less likely to run during peak energy usage

times. Sprinkler systems also can be programmed to activate

based on time of day or recent weather patterns. The goal of this

interconnected Web of actions is a smarter, more energy efficient

home tailored to the homeowner’s lifestyle.


The connected home can be organized into two areas. The first,

home automation, puts household systems such as climate control,

lighting and security online, allowing for control inside the home

(via a remote control or touchscreen) or away from home using

a computer or wireless device. The connectivity commonly

utilized by entertainment and computing devices on a home

network (such as the Wi-Fi connection between a PC and the

Internet) represents a second form of connectivity. The ease and

convenience of operating devices and systems wirelessly as well as

their online connectivity is the common thread.



The home automation industry in its present-day form originated

from home security and alarm systems. These products allow

homeowners to secure entry ways, provide a direct link to police or

emergency services and manage other devices such as the climate

control system. Say a homeowner leaves the house and sets the

alarm. This system, knowing the occupant has left, can tell the

lighting control system to activate preset timers while also lowering

the thermostat to conserve energy. The act of leaving or arriving

puts into motion a set of reactions designed to both secure the

home and conserve energy depending on the owners’ presence

(or absence) in the house.
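The leave-the-house cascade described above can be sketched as a small event-driven rule set. A minimal sketch follows; the class names, method names and temperature setpoints (AlarmPanel, Thermostat, LightingController, 72/62 degrees) are hypothetical illustrations of the pattern, not any vendor’s actual API.

```python
# Minimal sketch of an event-driven home-automation cascade.
# All names and temperature values are illustrative assumptions,
# not a real home-automation vendor's API.

class Thermostat:
    def __init__(self, home_temp=72, away_temp=62):
        self.home_temp = home_temp
        self.away_temp = away_temp
        self.setpoint = home_temp

    def set_away(self, away):
        # Lower the setpoint when the house is empty to conserve energy.
        self.setpoint = self.away_temp if away else self.home_temp


class LightingController:
    def __init__(self):
        self.preset_timers_active = False

    def activate_preset_timers(self):
        # Run lights on preset timers so the home looks occupied.
        self.preset_timers_active = True


class AlarmPanel:
    """Arming or disarming the alarm is the event that drives the rest."""

    def __init__(self, thermostat, lights):
        self.thermostat = thermostat
        self.lights = lights
        self.armed = False

    def arm(self):
        # The occupant has left: secure the home and conserve energy.
        self.armed = True
        self.lights.activate_preset_timers()
        self.thermostat.set_away(True)

    def disarm(self):
        # The occupant is back: restore comfort settings.
        self.armed = False
        self.thermostat.set_away(False)
```

Arming the panel starts the lighting timers and drops the thermostat to its away setpoint in a single step; disarming restores comfort settings, mirroring the scenario above.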

Fast forward to today. Home automation has evolved into an

ecosystem of security and utility management, giving homeowners

the ultimate level of convenience and control. Consumers now

can control all of the home’s systems while at home or away. The

result is smart, real-time control over virtually every mechanism

in the house. Security systems tracking occupant entry and exit

can link to climate control settings which change based on who is

home and the time of day.


A growing number of consumers are familiar with a mobile

Internet experience. For instance, 27 percent of U.S. households

own a smartphone while 52 percent own a laptop computer

according to CEA’s 11th Annual Household Ownership and Market

Potential study. Because of this shift, manufacturers have begun to

make online monitoring and control of home systems a standard

option. Offerings such as WL3 from HAI and Composer Home

Edition from Control4, products designed to link users to their home systems via the Internet (allowing homeowners to alter lighting, raise or lower thermostats, or monitor entries and exits recorded by the security system), tap into this demand. Forty-three

percent of adults said they would pay more for devices (such as

home security systems) that allow for wireless home monitoring

according to a recent CEA study. This indicates that these

capabilities appeal to a sizeable portion of consumers.

Many of us constantly check our e-mail, Facebook pages, and

bank accounts online. Are homeowners likely to monitor their

home systems with the same attentiveness? While it’s unlikely that

Web based home controls will approach the user engagement

level enjoyed by Facebook, the desire to conserve energy, secure

one’s home, or control other household appliances (such as

entertainment systems) could increase their popularity and give

rise to new capabilities. Imagine how GPS-enabled mobile devices

might enhance home automation (for instance, a smartphone

home control application, sensing the user is on a particular route

and heading home, could automatically adjust thermostat and

lighting settings based on the user’s location).
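Such a geofence trigger could be sketched as follows; the coordinates, radius, and controller interface are all hypothetical, not taken from any actual product:

```python
import math

HOME = (36.1147, -115.1728)    # home coordinates (lat, lon) - made up for illustration
GEOFENCE_RADIUS_KM = 5.0       # start preparing the house inside this radius

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def on_location_update(position, heading_home, controller):
    """Called with each GPS fix; adjusts the house once the user nears home."""
    if heading_home and distance_km(position, HOME) <= GEOFENCE_RADIUS_KM:
        controller.set_thermostat(72)          # degrees Fahrenheit
        controller.set_lights("entry", on=True)
```

The `controller` object stands in for whatever interface the home automation system actually exposes; the point is only that a location fix, not a key in the door, becomes the event that drives the house.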


According to Thomas Pickral, head of business development

at Home Automation Inc. (HAI), whole home automation

solutions primarily appeal to affluent homeowners “as a sensible

way for owners of large homes to manage lighting, security, and

entertainment systems”. While having the latest and greatest system

or gadget may be a strong purchase motivator, the goal for many of

these consumers is more practical. How can the owner of a home 6,000 square feet or larger ensure that every window is shut and every light switch is turned off? This practical aspect of these systems is the selling point for home builders and retrofit installers.

Lower-cost individual automation solutions, as opposed to whole-home systems, may open up these products to a broader market of consumers (including owners of smaller homes). For some, automation can begin as a singular solution to a problem. Perhaps you would like the living room lights to turn on when you pull into the garage, requiring that the lighting system be connected to your garage door. Once this connection has been established, adding other systems to the network, such as the thermostat or the water heater, is simple. The solution to one problem in the home can act as an anchor allowing more devices to connect with one another.


“ I really see the future of the industry in

industrial acceptance on the part of utility

companies and local governments.”

Thomas Pickral, head of business development at Home Automation Inc. (HAI)

Energy, according to Pickral, is the future of home automation.

“I really see the future of the industry in industrial acceptance

on the part of utility companies and local governments.” As

demands on power companies and the grid continue to increase,

management, as opposed to increased supply, is the key. In an effort

to cope with the rising costs of energy, we could see government

and utility companies working together to make smart meters and

load modules standard in all households.

The concept is straightforward. Utility companies could furnish

consumers with smart energy meters and load control modules

designed to encourage the use of appliances (such as air

conditioners or clothes washers) during off-peak periods with

the goal of eliminating the need to produce supplemental energy.

For instance, a homeowner might typically run the dishwasher

at 5 p.m. when demand is highest. Turning on this appliance

could activate a notification system telling the consumer that use

during peak periods will result in higher energy costs. If the user

is open to running the dishwasher at another time, less strain falls on the power grid, diminishing the need to purchase

extra electricity. Another option might be to disable appliances

like dishwashers, air conditioners and electric ovens during peak

periods requiring consumers to override them should they need

to be used.

Multiply this action across several appliances in one home

and thousands of homes in a locality and a noticeable change

in consumption could be seen. The hope is that an initial

investment in smart meters and load control sensors on the part

of governments could eventually reduce use during peak periods

– possibly eliminating the need for new power plant construction.

Altering usage habits instead of increasing supply is the goal.
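The peak-pricing notification described above reduces to a simple rate comparison. A minimal sketch in Python – the peak window, rates, and function names are invented for illustration, not drawn from any real tariff or product:

```python
from datetime import time

# Hypothetical tariff: a 4 p.m.-8 p.m. peak window and two flat rates.
PEAK_START, PEAK_END = time(16, 0), time(20, 0)
OFF_PEAK_RATE, PEAK_RATE = 0.08, 0.22   # dollars per kWh (invented)

def is_peak(t):
    """True when a start time falls inside the peak window."""
    return PEAK_START <= t < PEAK_END

def appliance_advisory(name, start, kwh):
    """Return the kind of notification a smart meter might send when
    an appliance of the given estimated consumption starts running."""
    if not is_peak(start):
        return f"{name}: off-peak, estimated cost ${kwh * OFF_PEAK_RATE:.2f}"
    saving = kwh * (PEAK_RATE - OFF_PEAK_RATE)
    return (f"{name}: peak period, estimated cost ${kwh * PEAK_RATE:.2f}; "
            f"waiting until after {PEAK_END:%H:%M} would save ${saving:.2f}")
```

A dishwasher cycle of roughly 1.5 kWh started at 5 p.m. would produce a peak-period warning quoting both the higher cost and the saving available by waiting.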



While the concept behind the wireless connection of

entertainment devices and home automation systems is the same,

the capabilities operate via different transmission methods. Most

consumers are familiar with home Wi-Fi connections operating

via the IEEE 802.11 standard. These connections are designed to

transmit large amounts of data (up to 54 Mbps) between game

consoles, MP3 players, personal computers and the Internet.

Installed home automation systems use a similar but more

specialized connection method (like ZigBee or Z-Wave)

intended to transmit less data (up to 250 Kbps) and operate using

less energy.
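The difference between the two link classes is easy to put in perspective with a little arithmetic (raw signaling rates only; real-world throughput is lower for both):

```python
WIFI_BPS = 54_000_000    # 802.11g signaling rate
ZIGBEE_BPS = 250_000     # ZigBee signaling rate

def transfer_seconds(size_bytes, bps):
    """Idealized time to move a payload at the raw link rate."""
    return size_bytes * 8 / bps

song = 5 * 1024 * 1024   # a 5 MB audio file
status = 64              # a 64-byte sensor status message

print(f"5 MB over Wi-Fi:  {transfer_seconds(song, WIFI_BPS):.2f} s")
print(f"5 MB over ZigBee: {transfer_seconds(song, ZIGBEE_BPS):.0f} s")
print(f"64 B over ZigBee: {transfer_seconds(status, ZIGBEE_BPS) * 1000:.1f} ms")
```

Moving a media file over ZigBee would take minutes rather than a second, while a thermostat reading moves in milliseconds either way, which is why low-rate, low-power radios suit automation and 802.11 suits entertainment.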

Is there potential for an out-of-the-box solution that allows

consumers to utilize their existing home networks for home

automation? One savvy homeowner, Matt Morey, created his

own connected home system using an IOBridge IO-204. This

device connects products in the home to the Web allowing them

to be monitored and controlled through an online interface. An

engineer at Texas Instruments, Morey linked the thermostat and

lighting controls in his home office to his Twitter account using

IOBridge’s module and a standard home network connection. This

set-up allows him to both observe the conditions in his office and

change the settings by simply “tweeting” a command. While the practicality of using a social network to control one’s home systems and devices is debatable, this convergence of social networking and home automation raises some intriguing possibilities. As more homes adopt wireless connectivity, can the remote interface be customized (as Morey has done with his Twitter account)? Or will the user’s experience be defined only by his choice of installer or equipment?
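At its core, Morey’s set-up maps short text commands to device actions. A rough sketch of that dispatch idea – the command grammar and handler names are invented, and IOBridge’s actual interface is not shown:

```python
import re

def parse_command(tweet):
    """Parse a hypothetical command grammar, e.g. 'thermostat 68' or 'lights off'."""
    text = tweet.strip().lower()
    m = re.fullmatch(r"thermostat (\d{2})", text)
    if m:
        return ("thermostat", int(m.group(1)))
    m = re.fullmatch(r"lights (on|off)", text)
    if m:
        return ("lights", m.group(1) == "on")
    return None   # not a recognized command

def dispatch(tweet, handlers):
    """Route a recognized command to the matching device handler."""
    cmd = parse_command(tweet)
    if cmd is None:
        return False
    device, value = cmd
    handlers[device](value)
    return True
```

A monitoring loop would feed each incoming tweet through `dispatch`, with the handler functions doing whatever the hardware bridge requires.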


Source: IOBridge


Will products emerge that allow non-engineering majors to self-install their own home automation systems? As more consumers

gain access to broadband and issues such as energy use rise in

importance, CE manufacturers may be able to capitalize on a

robust home do-it-yourself market looking for this type of control.

The utilization of existing home network connections will be the

key to widespread adoption.


Many users of digital media have faced a similar dilemma. While

content is purchased or downloaded onto one device (for instance,

an MP3/digital media player or a PC) they may want to enjoy it on

another device (perhaps a display somewhere in the home). While

this can certainly be accomplished, the process of transferring

files or reconfiguring PC connections often can be laborious and

require file formats be compatible with different media players or

operating systems.

As consumers transition to a digital lifestyle where music, video

and other files live not on physical media, but exist on hard

drives and on the Internet, how will the CE products they use

facilitate access across different environments? For consumers and

manufacturers, the Digital Living Network Alliance (DLNA)

is working to provide a standard. DLNA-certified products ensure digital content (photos, video, and music) is playable across products sharing a wired or wireless network connection. Standardization of this type is a win-win for consumers and CE companies alike. Manufacturers will improve the versatility of their products, enabling them to work in concert with other devices (which will increase the marketability of their offerings). Consumers will benefit through a more seamless experience and increased choice in where and how they access their media and other files.

A growing number of DLNA-certified products (including displays, A/V receivers and digital cameras) will reach the worldwide market in the next few years. In fact, ABI Research projects shipments of DLNA products to reach 300 million units by 2012. As this Web of device compatibility grows (Windows 7 will reportedly support DLNA products, opening the standardization to a huge market of consumers using a wide array of devices) this certification could become a purchase motivator for consumers.

Beyond home automation and entertainment, other opportunities

exist for connectivity in homes. While the market awaits the wired

refrigerator with RFID-enabled inventory abilities (will these products ever come to market?), other devices hold promise.

Demand for more wireless products in the home exists. A recent

CEA study found that 42 percent of consumers look for wireless

connectivity when shopping for electronics and 35 percent believe

a wireless connection in the home should be utilized by more

appliances and devices. Certainly there is space in the lives of

consumers to bring more products online.

One area ripe for opportunity is home healthcare. The proposed

overhaul of the nation’s healthcare system combined with

the growing medical needs of seniors presents a substantial

opportunity for medical device manufacturers. Products designed

to monitor health diagnostics (such as Tunstall’s Telehealth

Monitor) or manage pain and medication levels could be brought

online in the home allowing doctors and other health practitioners

to monitor patients from a central location. This would save

patients from having to make trips to the doctor for routine

check-ups, possibly improving quality of life. According to Parks

Associates, the wireless home healthcare market is expected to

grow to $4.4 billion by 2013, a 180 percent increase over its

current level.



What is the future of the connected home? Home automation

capabilities (such as remote home security and energy monitoring)

may appeal to consumers; however, perceptions regarding high

costs and the installation process may inhibit adoption. The

struggle for manufacturers, installers and dealers is making these

systems more accessible to a broad market. Increased marketing

and education efforts (for instance, communicating the energy

conservation advantages and the overall convenience these systems

provide), as well as involvement from government and utility companies, are likely to spur adoption.

Opportunities exist, as well, for consumers to implement their own

systems, though the purchasing and installation processes need to

be simplified. For other CE in the home, standardization is the key.

Consumers continue to embrace digital living and have transferred

some activities from one platform to another (for example online

video consumption moving from the PC to the flagship display).

Manufacturers able to respond to these shifts of content across environments, offering a more streamlined experience, stand

to benefit. ■


Firms such as Violet and Alcatel-Lucent’s Touchatag continue to

explore “The Internet of Things” - the connection of real world

objects to the Web. The Touchatag is an RFID (radio frequency

identification) reader which starts applications or opens Web

pages when a pre-programmed RFID-tag is swiped (for example

the Touchatag will open and play a child’s favorite movie when a

tagged action figure from the movie is swiped). The Nabaztag,

a device that converts e-mail, RSS feeds, and other information

from the Web into audio that is read to the user, also utilizes RFID.

A tagged umbrella, for example, could activate a weather report

when swiped on the Nabaztag.



Source: Touchatag

The goal of these products is to give ordinary items in the home

a degree of online relevance. The RFID-tagged umbrella becomes

both an instrument to keep the user dry as well as a tool for finding

out weather conditions. While the market for these products is

nascent (it is easy enough to check the weather using a computer or wireless mobile device), this type of technology could be useful to consumers looking for a more simplified way of obtaining information from, or sending it to, the Internet.
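Underneath both products sits the same simple pattern: a lookup table mapping RFID tag IDs to actions. A toy version, with invented tag IDs and actions:

```python
# Map RFID tag IDs to the action each swipe should trigger (all values invented).
TAG_ACTIONS = {
    "04:A3:1B": lambda: "play movie: favorite-film",
    "04:7F:2C": lambda: "read aloud: weather report",
}

def on_swipe(tag_id):
    """Look up a swiped tag and run its action; unknown tags are ignored."""
    action = TAG_ACTIONS.get(tag_id)
    return action() if action else None
```

The value of the product lies in letting a user bind any physical object (an action figure, an umbrella) to any entry in a table like this one.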



The television is changing. The transformations underway

will alter the TV experience as we now know it. Changes

over the last 50 years – from black and white to color

and from analog to digital – haven’t fundamentally changed

how individuals consume television content. But the television

of tomorrow – and tomorrow is near – will impact not just the individuals who consume television media, but everyone along the value chain, from content producers to distributors.


While many issues remain the same – what content will individuals

consume on their TVs, who will control this content, how will

consumers find and gravitate to new content offerings, and what

will bring about the greatest monetization of this content – the

new world of TV is significantly more complex. Worlds are

colliding – and it is happening on the television.



The television is far and away the most ubiquitous of all consumer

technology products and, by some accounts, one of the most

ubiquitous of any product category. In the U.S., roughly 98 percent

of households own at least one television set with the average

household owning 2.7 television sets. Nearly a quarter of U.S.

households own more than four working television sets. Very few

technology products have seen consumer uptake in excess of 90

percent and yet the television has held this mark for decades.

There are very few things that 98 percent of U.S. households have

in common. An active television in each home allows these diverse

and distinct households to partake in shared cultural experiences

– from watching men walk on the moon to discovering the details

of one of America’s greatest tragedies on the days following

September 11, 2001. From the “tale of a fateful trip, that started

from [a] tropic port” to American Idol, the television is the single

apparatus that bonds some 114 million households by molding

popular culture and defining the news of the day.

The television is moving beyond HD. But the following is not

about the television getting thinner (it is), faster (it is), bigger

(it is), brighter (it is), or better (it is). These improvements are

a given. The changes coming to the television and carrying us

beyond HD are more than form factor alterations and spec

improvements. The stage is set for developments like 3D, Web-enabled TV and interactive TV to radically change tomorrow’s

TV experience.

These changes are not new. 3D has existed in some form for

many decades. Similarly, interactive TV has been talked about

for decades by executives in the hardware, software and content

industries. However, television viewing and Internet usage has

matured to the point that developments like Web-enabled TV can

finally prosper as home technologies.




The first high-definition television (HDTV) set was sold at retail

in 1998 and today roughly 60 percent of U.S. households own an

HDTV. The uptake of HDTV among U.S. households has been

phenomenal. Through the initial launch of HD and the subsequent decade of maturation, consumers have gained a great affection for HD content while realizing two important things: first,

consumers generally prefer to watch HD over standard-definition

video content and second, consumers do not require all of their

content to be in HD. While seemingly contradictory, these two

















TV reception in U.S. households (percent of households; five most recent annual surveys, latest at right):

Cable service only: 56, 53, 57, 55, 51
Cable service and antenna: 3, 2, 2, 4, 2
Satellite service only: 19, 22, 21, 23, 24
Satellite service and antenna: 3, 2, 3, 5, 3
Cable service and satellite service: 2, 3, 2, 2, 3
Fiber to the home service only: NA, NA, NA, NA, 3
Cable service, satellite service and antenna: 0, 0, 1, 0, 1
Antenna-only: 13, 12, 11, 9, 9
Not connected to anything: 3, 4, 3, NA, NA
Don’t know: 0, 1, 1, 1, 2
Total cable: 61, 58, 62, 61, 59
Total satellite: 24, 28, 27, 31, 31
Total fiber to the home: NA, NA, NA, NA, 6
Total cable, satellite or fiber: NA, NA, NA, NA, 89
Total antenna: 20, 17, 18, 19, 14

Source: CEA, DTV Transition Impact, August 2009.

factors will help set the stage for Web-enabled TV and 3D content

to find traction within the marketplace.

Paid television services also play an important role in how

television has emerged as well as what developments the next

decade will bring. Today, only nine percent of U.S. households rely

solely on over-the-air broadcast television. This figure has fallen

precipitously over the last four years as the U.S. has finalized its

transition to digital television.

Moreover, the number of households with any television sets

relying on over-the-air broadcasting has declined rapidly. Prior

to the end of analog television broadcasting in the U.S., roughly

one-in-five households had at least one television receiving over-the-air television. Following the transition, this figure dropped to

one-in-seven households. This data suggests households refrained

from upgrading secondary and tertiary analog television sets and

have become more reliant on paid television services.

In nearly 60 years of history, 1994 is the only year that spending on

paid TV services declined. During this 60-year period consumer

spending on paid TV services increased at a compound annual

growth rate of nearly 16 percent. Even in the most recent

economic downturn, consumers increased their spending on paid

TV services. Today, the average household spends roughly $700

annually on paid services.

Paying for Internet access is also common for U.S. households.

Today, roughly 75 percent of U.S. households have Internet access.

With broadband in 67 percent of U.S. households, it has clearly

become the preferred option for home Internet connectivity.

Since tracking spending on Internet access first began in 1987,

U.S. households have increased spending by a compound annual

growth rate of over 40 percent. As the future of TV goes beyond

HD, it will build off consumers’ willingness to pay for TV service

and Internet connectivity.



U.S. TV networks have flirted with 3D for years.

January 1989

NBC’s Super Bowl halftime show, dubbed ‘Bebop Bamboozled,’ included 3D effects that required special glasses to see, including Frisbees that appeared to come out of the screen. “It was a little like watching a football halftime

show in the distorted reflection of an old mirror,”

the Associated Press said at the time.

May 1994

Fox aired a 3D episode of “Married with Children.”

May 1997

For one week, ABC put nine of its shows in 3D, including

“Home Improvement,” “Coach,” “Spin City,” “Family Matters”

and “America’s Funniest Home Videos.” A few weeks later,

NBC aired a 3D season finale of “Third Rock from the Sun.”

August 2000

Discovery Channel kicked off its annual Shark Week with

“Sharks 3D,” which included scenes where sharks appear to

jump off the screen. “More like 2.5D,” wrote Washington Post

TV critic Tom Shales.

November 2005

NBC aired a 3D episode of psychic drama “Medium.”

February 2009

NBC showed a 3D episode of dork-turned-spy series “Chuck,”

the day after the Super Bowl.


Source: Sam Schechner, “After Conquering the Movies, 3D Viewing Makes Its Way Toward Home TVs,” The Wall Street Journal, August 17, 2009.




3D video has a long – and certainly cyclical – history leaving a

lasting imprint on the heritage of entertainment video. The stereo

film camera was introduced around 1890 and by 1915 the first

red and blue “anaglyph” movies were being shown to general

audiences. Decades later, 3D would experience the first of many rejuvenations: in 1953 alone, some 45 films were released in 3D. 3D returned to theaters for a short period in the early 1980s with films like Jaws 3-D, and then again in the late 1990s, 3D began

to reemerge in IMAX films. Beginning in 2004 several major films

were released in 3D opening the door for another revitalization.

In all of these cycles, 3D never made significant inroads into the

home. Until now, it has largely been a video experience reserved

for the cinema. But building on consumers’ experience with HD

in the home and after major technological improvements, 3D is

becoming a viable home entertainment technology.

While the uptake of 3D will likely not match the unprecedented success of HDTV, there are notable similarities that suggest 3D

can develop into a viable market opportunity. First, experiencing

3D content impacts and influences how consumers think about 3D

content. Consumers who have seen a 3D movie in the theater show

greater preference for 3D content over 2D content and are more

interested in watching 3D video content in their homes. They also

tend to believe 3D makes for a more enjoyable movie experience

and they are interested in learning more about 3D.

Second, while consumers show interest in an assortment of

content areas, the genres that receive the greatest attention when

it comes to 3D are the same ones that gained the most traction in

the early days of HDTV. When choosing only one genre, 22 percent of consumers indicate action and adventure movies in 3D would most influence their decision to buy a 3D-capable TV. Second on the list was sports programming, chosen by 17 percent of U.S. adults, followed by nature or wildlife shows, selected by 14 percent of consumers.

An important catalyst in making 3D a viable home entertainment

technology is the emergence of industry-wide standards.

Today, a multitude of groups are beginning the standards discussions and ascertaining where standards are most pivotal. Many pieces need to come together – from the mastering of 3D content to distribution to the hardware and eyewear used to display it. While content is increasingly

distributed into the home through downloadable or streaming

digital formats, physical media remains the primary method for

moving content into and around the home. While there is still

no agreement on a standard format for broadcasting or storing

3D movies and TV shows on Blu-ray Disc, Blu-ray is arguably the

format which can enable a home 3D market the soonest.

Similar to HD – there are certain hardware requirements needed

to enable 3D viewing – namely a 3D-capable television and special

viewing glasses. Having to wear special glasses to view 3D video

does impact both the desire to watch 3D video content as well as

consumers’ willingness to purchase a 3D-capable TV set, but as

consumers experience 3D video content first-hand, the impact of

this hindrance is muted. Approximately 44 percent of U.S. adults

say having to wear special glasses has no impact on their desire to

watch 3D video while 49 percent of those who have actually seen

a 3D movie in the last year say needing to wear special glasses has

no impact.

These results are consistent with consumers’ willingness to

purchase a 3D-capable television set. Forty-nine percent of those

who have not seen a 3D movie in the last year say having to wear

glasses makes them less likely to buy a 3D TV. However, only 40

percent of those who have seen a 3D movie in the last year say it

makes them less likely.

Gaming is also well positioned to advance 3D in the home. The recently formed S-3D Gaming Alliance is an advocacy and industry standards group aimed at growing the stereoscopic 3D gaming industry. HD gaming has helped to advance HD within the home and it is natural to assume gaming can do the same for 3D.

Groups working on 3D standards:

Society of Motion Picture and Television Engineers (SMPTE) – Professional association for motion picture and television engineers. Working on standards for the production and mastering of 3D video content, so that 3D content can be mastered and produced in one format but distributed across diverse distribution channels.

3D@Home Consortium – Consortium formed in 2008 with the mission to speed the commercialization of 3D into homes worldwide. Working to identify needs for standards and other market requirements.

Society of Cable Telecommunications Engineers (SCTE) – Professional association for the telecommunications industry. Working on standards for transmitting 3D in cable TV systems.

Consumer Electronics Association (CEA)® – Professional association representing some 2,000 technology companies. Working on standards for consumer electronics hardware related to 3D; CEA is currently working on standards for 3D eyewear to enable the required accessory market.

Advanced Television Systems Committee (ATSC) – Group that developed the ATSC standards for digital TV. Discussing how 3D impacts existing standards.

Entertainment Technology Center @USC – Professional association of major studios, technology, and service companies. Running working group projects to address 3D issues.

Blu-ray Disc Association – Group of companies dedicated to developing and promoting the Blu-ray Disc format. Its 3D Task Force is working to add advanced 3D specifications to the format standard.

High-Definition Multimedia Interface (HDMI) – Group of companies dedicated to developing and promoting the HDMI specification. Its 3D effort aims to add advanced 3D specifications to the standard.

Consumers increasingly are willing to pay a premium for

3D-capable television sets. Nearly half of consumers indicate they

would spend more to have a television set capable of displaying 3D

content. Fifteen percent indicate they would spend a premium of 25 percent or more for a television set capable of displaying 3D

content. Consumers who have experienced 3D first-hand are more

interested in the technology and in this instance more willing to

spend for the technology. For those who have seen a 3D movie

in the last year, 60 percent are willing to spend more on a 3D

television for their home and 19 percent are willing to spend up to

25 percent more.

While still in its infancy, a viable consumer market is emerging

for 3D. All told, more than 26 million households are interested

in a 3D content experience in their home representing nearly 23

percent of all U.S. households.

2010 will represent a watershed year for 3D on many fronts. While

there are a limited number of 3D-capable television sets available

in the marketplace, nearly every major television manufacturer is

expending considerable resources to launch its own 3D-capable

offerings in 2010.

While manufacturers are bringing to market their 3D-capable TV

offerings there are also significant developments brewing with

respect to the availability of content. For example, in the United

Kingdom, satellite TV provider Sky TV is preparing to launch a 3D

television channel in 2010. Sky also plans to produce and deliver

unique, exclusive 3D video content on the channel. In the U.S.,

satellite TV provider DirecTV and cable network owner Discovery

Communications, among others, are examining different options

for delivering 3D video content into the home. It is likely that the

U.S. will also see the first channel focused on 3D video content

appearing in 2010.

Consumers are showing increased interest in 3D content at the

same time manufacturers are bringing a slew of new offerings to

market. Content owners have shown their willingness to produce

3D content for the big screen and are looking for adjacent markets in which to monetize these offerings. At the same time, paid TV operators are looking at ways to bring 3D into the home. If 3D is to make inroads into the home – and survive as a home technology –

now is the time.



The notion of convergence between any two devices – but especially the television and the computer – has received continuous attention. Despite more than 300 million television sets in use within U.S. homes, 84 percent of U.S. households with at least one computer, and three-in-four U.S. households having Internet access – the union of the television and the computer has been slower than anticipated. In fact, according to CEA’s recently published study, Net-Enabled Video – Early Adopters Only, over a quarter of U.S. households still have no interest in accessing the Internet through their television set. Still, there are subtle shifts in consumer behavior and industry developments that are moving us closer to a world where the Web will be more fully integrated into our television experience.

Despite having 2.7 television sets in operation in each household, the living room is still the primary viewing room for television with 92 percent of households watching TV there. The bedroom, used by 60 percent of U.S. households, is a distant second. Behind the home office, the living room is also the most Web-connected room within the home, with 57 percent of households reporting an Internet connection (wired or wireless) in the living room. The heart of the home remains the key arena for action between the television and the Internet.

There are two ways Internet connectivity will likely come to the TV. It will either be embedded in the television itself or it will come through adjacent and connected products like set-top boxes and game consoles. Today, the latter is the primary vehicle to bring Internet connectivity to the TV. For example, 14 percent of households report having their TV connected to the Internet via a game console. But Ethernet-enabled TVs are growing. CEA estimates that by the year 2013, nearly 60 percent of all TVs sold in the U.S. will be Ethernet-enabled.

From an industry perspective, a move toward Web-enabled TVs is a logical next step in a market where prices are declining and a change in mix will no longer offset these deflationary pressures. Web-enabled TV is emerging in a very organic way. Companies like Verizon and DirecTV are experimenting with services to enable subscribers to access Internet-housed content and offerings like Twitter, Facebook or the photo-sharing site Flickr. Thus far, the focus has been to allow consumers to download video content, but increasingly consumers will be able to upload video content like home movies.

The bulk of Internet-enabled TV today revolves around video. CEA data finds nearly two-in-five online adults (38 percent) have watched a streaming video in the past year and this trend is growing. Recent data from Ipsos MediaCT found twice as many online Americans streaming online television through services like Hulu as in 2008. Broader adoption of the Web-connected TV will drive new business models and perhaps make offerings like viewing first-run movies within one’s home a mainstream reality.


The definition of interactive TV is unsettled. Many of the features

and services referred to as interactive TV a decade ago now go

by another name. Features and services like video-on-demand

(VOD), interactive programming guides (IPG), and digital video

recorders (DVR) have found enough success in the market that

they go by their own name and fall outside the definition of

interactive TV as it is discussed today. At its root, interactive TV

is about two-way communication from broadcaster to viewer

and viewer to broadcaster where the viewer has the ability to

manipulate in some way how they consume the programming.


Ultimately, interactive TV is allowing the viewer to influence

the content in real-time. This can happen on many fronts from

pausing content to voting during the program.

Data from Nielsen’s Convergence Panel provides an interesting

glimpse into how households are currently creating their own

interactive television experience. For example, during the 2009

Oscars roughly one-in-ten viewers were also simultaneously logged

onto the Internet. The most visited site was Facebook, where

the average visitor spent 76 minutes. Individuals concurrently

watching the Oscars and using Facebook tended to watch roughly

50 percent more of the program than the average viewer. Nielsen

estimates there were 100,000 Twitter messages about the Oscars

sent during the broadcast – roughly one every seven seconds.

Recently, Fox announced that it would insert a continuous Twitter feed on the lower third of the screen during a live broadcast – for the first time making the Twitter conversation

a direct part of the TV experience. This type of convergence is

creating an interesting rift between the time-shifting interactive

TV experience we recognize now and the desire for a real-time

community experience that goes beyond the living room.

CEA data finds almost a third of online adults (30 percent) always

or usually surf the Internet while watching TV and another third

(32 percent) sometimes do so. The simultaneous use of Internet

connections and broadcast television raises important questions

about how consumers are currently using these two services

simultaneously and how they might use them in the future.

Current usage suggests consumers will not be satisfied with a

single device, but might prefer separate devices so that they can

continue to multitask as they do now.

While much of the discussion around interactive TV has focused

on the ability to toggle in real-time between activities like

watching, shopping and researching – there is the potential for

interactive TV to emerge with a delay – similar to a memo or

reminder. For example, rather than buying the jeans one sees in

a movie, one could tag the jeans and come back to them later.

This has the potential to in turn create a tremendous amount

of information which will influence advertising and the buying

experience – think “most popular tags from Thursday Night TV.”

Imagine never having to look for just the right present for a

loved one again – simply check what that person has tagged. Viewers

will either come back to this information later (pull) or will have

relevant information sent to them (push). For example, we might

have coupons sent to us, or have retail establishments contact us

with deals for tagged items. According to Scarborough Research, roughly 7.5 percent of U.S. households (8.6 million) receive

coupons via text messages or e-mail. These households tend to

be younger (14 percent more likely between 18 and 24) and have

more years of education (51 percent more likely to have a college

or graduate degree). These are the exact audiences most likely to

take advantage of interactive TV.
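The tag-and-return flow described above can be sketched in a few lines. Everything here is hypothetical: the class name, the data model and the sample tags are illustrative, not any real interactive-TV API.

```python
from collections import Counter

class TagStore:
    """Hypothetical store for items viewers tag during a program."""

    def __init__(self):
        self.tags = []  # (viewer, item, program) tuples

    def tag(self, viewer, item, program):
        # Viewer marks an on-screen item to revisit later.
        self.tags.append((viewer, item, program))

    def pull(self, viewer):
        # The 'pull' model: the viewer comes back to saved items.
        return [item for v, item, _ in self.tags if v == viewer]

    def popular(self, program, n=3):
        # Aggregate view: "most popular tags from Thursday Night TV."
        counts = Counter(item for _, item, p in self.tags if p == program)
        return [item for item, _ in counts.most_common(n)]

store = TagStore()
store.tag("alice", "jeans", "Thursday Night TV")
store.tag("bob", "jeans", "Thursday Night TV")
store.tag("alice", "sunglasses", "Thursday Night TV")
print(store.pull("alice"))                 # ['jeans', 'sunglasses']
print(store.popular("Thursday Night TV"))  # ['jeans', 'sunglasses']
```

A "push" service would invert the last step: scan the store for tagged items and send coupons to the matching viewers.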

Ultimately, an Interactive TV experience that blends media

consumption with other activities like shopping will need to

balance the passive way in which most consume media content

with the desire to create additional actions for the consumer.


Paid TV services took content dissemination from a dozen or so

simultaneous options to hundreds of content offerings. Early

successes like video-on-demand have added dozens more. But an

Internet connection opens a world of millions of simultaneous

content offerings. Perhaps most importantly, previous content

offerings were structured and organized. Internet content is

decidedly less organized – making content parsing the biggest

hurdle to Interactive TV.

An overwhelming amount of choice will play an important role

in defining how the next-generation of TV emerges. For example,

the U.S. Tennis Association (USTA) recently announced that

during the U.S. Open it will offer live online streaming of more

than 150 matches totaling some 300 hours of live video. As storage

costs decrease and bandwidth increases – content is exploding.

According to Cisco, by 2013, consumer IP traffic will be three times

that of enterprise IP traffic. Much of this will be driven by rich

media digital entertainment. Cisco estimates that by the year 2013,

it would take more than half a million years to watch all of the

online video that crosses the network on a monthly basis.
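Cisco's "half a million years" figure can be sanity-checked with back-of-the-envelope arithmetic. Both inputs below are illustrative assumptions, not Cisco's actual forecast model.

```python
# Rough check: how many years of viewing does a month of video traffic hold?
monthly_video_bytes = 8e18   # assumption: ~8 exabytes of video per month
avg_bitrate_bps = 4e6        # assumption: ~4 Mbit/s average stream quality

seconds_of_video = monthly_video_bytes / (avg_bitrate_bps / 8)
years_of_video = seconds_of_video / (365 * 24 * 3600)
print(f"{years_of_video:,.0f} years of viewing per month")  # ~507,000 years
```

With these assumed inputs the result lands right around the half-million-year mark; different bitrate assumptions shift the total, but the order of magnitude holds.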

Cataloging content can become burdensome in a world with

unlimited choice. A plethora of partnerships have been formed in

the last year to provide preferred positioning to specific content.

This shouldn’t be surprising. Hoping to stand out in a world with

nearly 700 exabytes of information is akin to keeping a glass of

water secure in front of a breaking dam.

Content discovery is an important key in a world with unlimited

information, but relevancy will carry the day. The battle for

consumer attention will be colossal. The role of search, user

profile, and the aggregation of dispersed information will define

how net-enabled TV emerges.

In order to thrive, Interactive TV must take advantage of this

dispersed information. From user reviews to linked profiles and

preferences within social networking communities, the most

promising interactive TV offerings will include degrees of artificial

intelligence. For example, while packing for a flight one might turn

to the Weather Channel hoping to catch the latest weather report

for an upcoming trip location. Right now, the television doesn’t

know the precise content the viewer is looking for, so the viewer

must watch irrelevant content. But other available information

can help parse what is important because the viewer’s calendar

shows an upcoming trip and other relevant travel details like

hotel reservations and car service have been logged. Combining

dispersed pieces of information will allow the viewer access to the

most relevant content given a set of circumstances.
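A minimal sketch of that idea, assuming (hypothetically) that a set-top box can read the viewer's calendar and a feed of city forecasts; all names and data here are invented for illustration.

```python
# Hypothetical data sources: a calendar entry and a forecast feed.
calendar = [{"event": "flight", "city": "Chicago", "date": "2009-11-12"}]
forecasts = {
    "Chicago": "Thursday: snow, high 31F",
    "Las Vegas": "Thursday: sunny, high 68F",
}

def relevant_forecast(calendar, forecasts):
    """Surface the forecast for the viewer's next trip destination."""
    for entry in calendar:
        if entry["city"] in forecasts:
            return forecasts[entry["city"]]
    return None  # no trip found: fall back to the regular broadcast

print(relevant_forecast(calendar, forecasts))  # Thursday: snow, high 31F
```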



Content owners face an ever more complex world. Today they must decide between working with existing partners to have their content delivered through the system and turning to new partners to have their content delivered over-the-top. Understanding what will win, what will lose, and what might coexist becomes crucial for content owners looking for distribution.


There is huge potential for channel conflict – as content

owners experiment with different business models in the hope

of discovering which models attract consumer attention most

effectively. In a world with seemingly unlimited content choice,

content creators and advertisers must also decide what time

parameters make the most sense as they fight for consumer

attention. Does an evolving 30-second spot change the traditional

30-minute time frame?

CEA is already active in 3D TV with a new working group and

task force dedicated to creating standards for 3D video in the

home. For information contact Alayne Bell at abell@CE.org.

Finally, controlling media flow in the new world of net-enabled

TV will play an ever increasing role. Will content flow be

controlled by traditional paid TV service providers, prominent

Internet properties like Google, or other players? As content choice escalates, what role will sites like Facebook play in framing relevancy and driving content discovery? These are all questions

searching for answers in a TV world beyond HD.


Interesting and important struggles lie ahead for TV as we move

beyond HD. The natural contention between closed and open

environments will continue and perhaps even heighten. The rifts

between consumer demand for all-encompassing access and

industry motives to create a seamless experience and offer new

business models like first-run movies will continue to be sorted out in the marketplace.


We’ve learned from past technological revolutions that taking

existing content or experiences and moving them into new

markets in largely the same structure will not succeed. Taking a

brick and mortar concept and moving it to the web in a static way

did not find success in e-tailing circa the 1990s. Pushing Internet

content onto the mobile device for traditional “Web surfing” also

failed to provide consumers with the experience they were looking

for in a mobile Internet environment in the early 2000s. Going

beyond HD will require more than taking adjacent markets and

moving them onto the TV. The experience will require more originality. It will also require the ease-of-use that we find in

each successful technological revolution.

Finally, the struggle in deciding what TV will be or should be – will

be an on-going dialogue. These themes are currently infiltrating

the mobile device arena. For example, Apple recently rejected Google's Google Voice app, stating that the app would

“alter the iPhone’s distinctive user experience by replacing the

iPhone’s core mobile telephone functionality and Apple user

interface with its own user interface for telephone calls, text

messaging and voicemail.”

Inevitably, features and functions will emerge for the television

posing the possibility of altering the “distinctive user experience.”

Some of these features like live digital video recording have seized

the day and excelled in the marketplace. Many are yet to come. The

decade ahead will prove an important one for TV. ■



Three screens orbit the CE industry today: TVs, computer

displays and mobile phones. But a fourth screen, embedded

in the dashboards of tomorrow’s autos, will soon frame

the consumer CE experience. As with other screens, the enabling

technology for in-car displays is connectivity, which is opening

a new market ripe with opportunities for manufacturers and

content producers alike. This is the promise of the connected car, a

platform poised to make a quantum leap in the next several years.

Up until now, connectivity in cars has been limited to proprietary,

application-specific services – and limited to only a handful of

vehicle makes/models. However, as Internet connectivity infuses

the vehicle platform, the car will finally be linked to other CE

engagement arenas such as home, work and on-the-go, adding

a new dimension to consumer electronics. This Five Technology

Trends to Watch section examines how the connected car has

evolved and explores what’s happening with the current crop of

connected vehicles. Are business models changing? What market issues and challenges confront the marketplace? Are consumers ready for the connected car?



Some industry observers might say the connected car is all but a certainty, given recent technology trends

and the connected nature of today’s society. However, numerous

consumer behaviors and business cases lie at the heart of the

connected car; collectively helping propel this concept into a

broader reality.

On the consumer side, awareness of and demand for more robust in-vehicle technology and connectivity have slowly grown thanks

in large part to connected devices brought into the car like cell

phones and portable navigation devices (PNDs) and embedded

services like GM’s OnStar or BMW Assist.

On the business side, the connected car delivers opportunity for

automotive and CE industries alike. Connectivity provides car

manufacturers a new competitive sales platform in addition to

fuel economy, safety features, etc. Services attached to connected

cars also can expand the recurring revenue opportunity for vehicle

OEMs beyond maintenance and service contracts. Connected

vehicles can deliver better performance metrics to guide product

development – which may help car makers design better vehicles.

And wireless ‘updates’ beamed to cars could save the industry

billions in costly recalls or other less urgent fixes.

For the CE industry, the connected car has the potential to

generate new sales opportunities which are not necessarily

limited to the OEM business. As consumer demand for vehicle



Source: Strategy Analytics, July 2009

connectivity grows, manufacturers will surely respond with

aftermarket solutions tailored to specific needs. Partnerships

will be key, which is where the content community comes in.

Connected cars park a new mobile audience in front of content

producers fostering greater engagement and generating new

advertising sales potential.

While multiple industries and businesses will benefit from new

opportunities resulting from connected cars, consumers are the

real beneficiaries of this innovation that will yield entirely new

CE experiences helping them stay safe, connected, informed and

entertained while on the road. A handful of embryonic connected

cars are available today, but how long will it take to realize near-ubiquitous connectivity among new vehicles? Is the connected car on the fast-track, or facing a long and bumpy road?

“Hands-free voice communication is a typical first step, which usually fosters demand for more connected services,” says iSuppli’s Phil Magney.

The idea of Internet connectivity in the car is not top of mind

among most consumers today. Recent CEA consumer research

found just 17 percent of U.S. adults agree with the statement: “I am interested in accessing the Internet in my vehicle.” Men are a

bit more enthusiastic at 21 percent, while women are less so (13

percent). Agreement is also higher among young adults and adults

from higher income brackets.

However, while U.S. adults overall appear to have a middling interest in car connectivity, other more vehicle/technology-centric consumer segments are more receptive to the connected car idea. We’ll take a closer look at these consumers later, but given the nascent nature of the connected car, its niche-oriented appeal is not surprising.




Pinpointing the first iterations of the connected car depends on

how you define the term, but we can look to historical industry

trends marking the early days of this endeavor. The earliest

connected cars began to appear in the last decade; employing

proprietary, embedded systems referred to as ‘telematics’. These

systems were attached to services mostly geared toward safety and security.





Before we delve too deep into the earliest manifestations of the

connected car, it is important to understand the consumer market

fundamentals supporting this trend. For starters, Americans

love cars and we own a lot of them. A 2006 U.S. Department of

Transportation (DOT) study found there were more than 251

million registered vehicles in the country – a figure outnumbering

the tally of licensed drivers. And as a result of our crush on cars, we

spend several hours a week behind the wheel commuting to work,

running errands or making a road trip.

Americans also love technology. CEA estimates 90 percent of U.S.

households own a cell phone and nearly as many (84 percent)

have a PC at home. Thanks to technology we lead increasingly

connected lifestyles a la Wi-Fi, and busy ourselves e-mailing,

texting and chatting. We download and stream innumerable

gigabytes of rich media content for news and entertainment. And

we cyber-socialize via an armada of social media services like

Facebook and Twitter. But are consumers really thinking of their car when it comes to these pursuits?

Phil Magney, vice president, Automotive Practice at iSuppli, sums up the state of the consumer and the connected car: “Consumers are beginning to accept and embrace connectivity in the car.”
GM’s OnStar provides a good example of this early approach

which is still relevant today. Fast-forward a few years to the middle

part of this decade and we see smartphones and connected PNDs

begin to enter the picture. This was the genesis of the device

approach to the connected car – another method employed today.

Through portable devices brought into the vehicle environment,

consumers could realize some semblance of car connectivity and

the seeds of future demand were planted. A couple of years ago,

Bluetooth-enabled smartphones were able to ‘pair’ with compatible

vehicles translating phone features to the car and enabling

hands-free calling. Consumers were gaining more experience with

connectivity in the car and the technology continued to evolve.


The connected car market today is still dominated by two

approaches, with some solutions connecting the vehicle through

embedded technology, while other systems utilize a device

(wireless phone) to connect the car.

Apart from the connection method, the connected car boils down

to two types of services: vehicle-centric and consumer-centric.

Vehicle-centric services revolve around the car itself, enabling

remote door unlock or notifying authorities after a collision.

Consumer applications are tailored to the driver or vehicle

occupants. Internet search capability and concierge services are

good examples of consumer applications. Eco-functions are a

relatively new set of consumer applications. For instance, Fiat’s

Eco: Drive collects and analyzes driving data and can make

recommendations on behavioral changes to minimize emissions

or maximize fuel economy.

Innovation is still evolving in the connected car. Some of the latest

approaches employ an open-source model and another creates a

mobile hot-spot. Next, we’ll examine some current market plays

and reveal the latest innovations in the quest for the connected car.


In the device approach, the driver's phone facilitates the connection to the car. This approach

mitigates monthly fees from dedicated connections, but has an

Achilles heel according to iSuppli’s Phil Magney.

“The inherent challenge with the device connection approach is

connected services are dependent on carrier coverage areas,” says

Magney. “And, of course, nothing happens if the phone is off or

damaged in a crash.”

What’s more, SYNC uses the phone’s voice channel, encoding

data packets ‘in-band’, which can be frustratingly slow. This is one

reason most industry observers agree the embedded connection

method will eventually win in the U.S. market, while in other more

price sensitive markets (e.g. BRIC countries) the device method

may be preferred to keep costs low.

Interestingly enough, while OnStar and SYNC are at opposite ends

of the connection/service continuum, these services are beginning

to move toward the middle in their design and offerings. Industry

sources expect GM’s OnStar to soon launch a suite of consumer-centric applications, while Ford’s SYNC may soon add embedded

connectivity to ensure a connection is always available for its ‘911

Assist’ emergency service.



Source: General Motors


GM’s OnStar service represents the embedded connectivity

approach using an array of telematics (e.g. on-board GPS receiver,

sensors, cell phone, etc.). As a result, OnStar’s services are more

vehicle-centric in nature. One advantage to the embedded

approach is the vehicle enjoys a dedicated, high bandwidth

(typically 3G) wireless connection to the service provider.

However, this comes at a cost – usually in the form of a monthly

subscription – which not all consumers are prepared to pay,

regardless of their financial situation.

The good news is pricing for most embedded connected car

services is fairly reasonable and most providers (like OnStar) have

different plans to suit individual needs. For example, OnStar offers

two plans: ‘Safe and Sound’ and ‘Directions and Connections’

priced annually at $199 and $299, respectively.

Ford SYNC represents the device connection method and is a consumer-centric service. In addition to voice-activated media player controls and hands-free calling, the SYNC service offers a suite of connected driver applications delivered through the driver’s cell phone.

Source: Autonet Mobile



As the connected car continues to evolve, new products and

services are coming to market with new value propositions,

lending a new twist to the connected car.

One such product/service combination is Autonet Mobile, which

has built a network for cars that allows passengers with Wi-Fi-enabled devices to connect to the Internet at broadband speeds.

Essentially, Autonet Mobile’s service turns your car into a roving

hot spot. The business model here requires customers to purchase

Autonet’s mobile Wi-Fi router for $399; and similar to the

‘embedded’ model a monthly service fee is required. Customers

can select from two monthly plans based on total monthly data

transfer volumes (1GB=$29/month or 5GB=$59/month).
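A quick calculation shows how differently the two plans price each gigabyte; the plan labels below are mine, and the figures come from the pricing cited above.

```python
# Effective cost per gigabyte of the two Autonet Mobile plans cited above.
plans = {"1GB plan": (29, 1), "5GB plan": (59, 5)}  # ($/month, GB/month)
for name, (price, gb) in plans.items():
    print(f"{name}: ${price / gb:.2f} per GB")
# 1GB plan: $29.00 per GB
# 5GB plan: $11.80 per GB
```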

The sales channel for Autonet Mobile is expanding rapidly both

in the aftermarket and through vehicle OEMs. This summer the

company signed distribution agreements with Amazon.com and

Volkswagen of America Inc. (marketed as uconnect Web). But

Autonet’s mobile hot-spot concept is not the only new service

competing for cars.

AutoLinQ, a ‘next-generation infotainment and connectivity solution’ developed by Continental Corp., takes a more traditional ‘fourth screen’ approach to the connected car concept, but brings forward the first open-architecture platform. It was introduced at the Telematics Detroit 2009 Conference in early summer.


Many business issues confront the connected car, creating

potential potholes on its path to profitability. One key issue facing

the connected car is control. Who controls the experience? Vehicle OEMs? Wireless carriers? Consumers? Expert opinions vary.



Controlling the connected experience is a concern of vehicle OEMs

for obvious safety and liability reasons. Most industry observers

agree consumers expect car makers to support connected services

present in the vehicle. How then do we reconcile forthcoming open-source solutions like AutoLinQ? The answer lies in what

iSuppli’s Magney refers to as ‘controlled openness,’ similar to

Apple’s App Store where content is vetted and approved before

hitting the virtual shelf.

AutoLinQ promises to give drivers and passengers the ability to personalize content available in the car through embedded and downloaded applications. As an open-source solution, Continental plans

to make AutoLinQ application tools available to third-party

developers and leverage the forthcoming Android Marketplace as

well as other online content procurement sources.

The company envisions AutoLinQ applications will deliver

content and services adapted for cars and relevant to the driving

experience. Based on information released by Continental, the

AutoLinQ solution should be a harmonious blend of vehicle and

consumer-centric features and functions. Yet it remains uncertain whether vehicle OEMs will embrace an open-source connected car model.


The success of the Apple App Store demonstrates how CE

manufacturers, service providers and third-party developers can

work harmoniously together. The question remains, however, whether the vehicle environment shoulders a different set of business issues

compared to cell phones, PCs or game consoles.

However, some argue control of the connected car lies with wireless carriers, whose wireless data networks carry the signals and data packets that support connected services. Will we eventually see net neutrality issues crop up as bandwidth becomes limited?

Others believe consumers are ultimately in command, passing

judgment on connected services with their dollars.

Another key issue facing the industry is finding the right business

model. Many ideas have been tested, but none have risen to the

top as an infallible scheme. Joanne Blight is an automotive practice

director at U.K-based Strategy Analytics where the business of

the connected car is an omnipresent focus. Blight believes the

connected car business model needs some work, starting with

the basics.

“The biggest challenge is finding out where the opportunity is, and

then what consumers will pay for,” she says. “Vehicle OEMs are still

searching for the answer. Consumers appear less willing to pay for

safety and security services, but willing to open their wallets for

entertainment and information content.”

Blight believes the most challenging business models are where

companies act unilaterally. She says partnerships, typically between

vehicle OEMs and wireless carriers, work best. “Wireless carriers

understand consumers when it comes to selling and delivering

services,” says Blight. “The OEMs know how to build and sell cars.

It’s all about core competencies.”

Monetization and cash flow also are core issues of the connected

car. In the decade ahead, expect car makers to employ different

payment options attached to service plans and options enabling

consumers to tailor their experiences to their lifestyle. Consumers

will be able to choose from several usage models including

subscriptions, ad-supported services and pay-per-use billing

options. We may see providers employ a combination of these

methods to maximize market potential.


The concept of the connected car may soon have another

meaning as electric vehicles take to the streets. Power stations

could supplant gas stations along America’s highways. Will

drivers soon say “plug ’er in” instead of “fill ’er up”? To learn more about electric cars and the role they’ll play in the nation’s

new energy infrastructure, check out Chris Ely’s piece on

Smart Grid technology on page 25.

Another business issue more salient to the CE industry is

technology. That is, keeping up with technology and ensuring cars

do not become obsolete after only a few years. The technology

issue of the connected car is analogous to the PC or wireless

phones where upgrades or altogether new hardware may be

required to use new services. As more connected vehicles hit the

road, vehicle OEMs must work carefully with the CE industry to

design scalable systems that are easily upgraded or serviced. This

is also where the aftermarket comes in as a purveyor of solutions

(e.g. sensors, cameras, displays, connectivity) and a provider of

installation services.

Apart from business models and technology trends perhaps

the most important business issue of all is identifying the right

customer for connected services.


A November 2008 consumer research study by CEA identified

the core market for all in-vehicle technologies. These consumers,

labeled Technology Enthusiast Drivers, represent 44 percent of

the U.S. online adult population, which equates to 77 million

people. However, the research went further to drill down to four

main demographic segments within this group who have a special

affinity and purchase intentions for in-vehicle technology. These

segments include parents with young children, parents with teens,

young adults, and road warriors (business people who frequently

travel). This research points to a large potential market for

connected car services and gives a clue to which consumers might

be early adopters.

Overall, Technology Enthusiast Drivers are most interested in

entertainment and communication functions and less interested

in safety and security services. For example, half of all Technology

Enthusiast Drivers are interested in installed vehicle information

and communications systems and 35 percent are interested in

in-vehicle television. The research also shows some functions are

more niche-oriented. For example, road warriors and young adults

are most interested in computers in the car to access e-mail and the Internet.




Some of the more ‘global’ and far-reaching efforts related to

the connected car involve development and support of Vehicle

Infrastructure Integration (VII) and Intelligent Transportation

Systems (ITS). The impetus behind these initiatives is two-fold.

The first goal is to promote safety by linking the car to information

sources and even other vehicles to reduce collisions. The second

goal is to ease traffic congestion, promoting greater productivity

and reducing fuel consumption.

The implementation of these concepts ranges from simple to

complex. VII encompasses everything from electronic toll booths to

connected roadside weather monitoring stations informing drivers

of changing weather conditions. Meanwhile, ITS refers to more



macro-level elements such as 511 call centers and traffic signal

controls that can be adjusted as traffic conditions dictate.

Key technologies helping make these concepts reality are, of course, telematics and GPS, but keep a watch on new communication protocols like Dedicated Short Range Communication (DSRC). This fairly new standard operates on the 5.9GHz band and provides connectivity within 1,000 meters of a transmission source. DSRC is seen as the key to vehicle-to-vehicle

communication and connectivity to VII information sources.

In addition, expect to see higher bandwidth connections such as

WiMAX and Long Term Evolution (LTE) connectivity among

connected vehicles over the next several years. As vehicle and

consumer-centric applications and services advance, higher

bandwidth connections will be necessary to accommodate the

rising tide of data and communication information passed over

vehicle networks. Greater bandwidth will also be required for

the raft of entertainment content expected to appear on

in-vehicle screens.

What’s more, speedier connections that enable new business

models to flourish will ultimately foster radical changes in the

way vehicle OEMs interact with customers. For example, wireless

software updates to connected vehicles can deliver huge savings

to car makers in service expenses while boosting customer

satisfaction. Also insurance providers, armed with vehicle usage

data, may be able to offer drivers premiums based on how they

drive (e.g. acceleration, braking, speed, mileage and hours behind

the wheel).
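How such usage-based pricing might work can be sketched with a toy rating function. The base rate, weights and low-mileage credit below are invented for illustration; no real insurer's model is implied.

```python
def monthly_premium(base, miles, hard_brakes, mph_over_limit):
    """Toy usage-based premium: base rate plus behavior surcharges."""
    surcharge = 0.02 * miles + 1.50 * hard_brakes + 3.00 * mph_over_limit
    discount = 25.0 if miles < 500 else 0.0  # low-mileage credit
    return max(base + surcharge - discount, 0.0)

# A light-use, fairly careful driver under these invented weights:
print(monthly_premium(base=80.0, miles=400, hard_brakes=2, mph_over_limit=1))
# 69.0
```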

“Connectivity will become necessary for OEMs to remain

competitive in the years to come,” says iSuppli’s Magney.

“Connectivity will be the key to managing and servicing the

vehicle and maintaining customer relationships.”



Connected cars and the fourth screen will positively impact not

only the CE experience, but the driving experience as well. Drivers

will no longer be isolated travelers, but part of an interactive,

connected system. In the coming decade, car connectivity will

usher in a new era of competition among CE manufacturers,

vehicle OEMs, service providers and content producers generating

near limitless entertainment, communication, safety and

security options for consumers. Both OEMs and the aftermarket

will benefit from opportunities stemming from fourth screen

applications and services. The real winners will be consumers,

whose love-affair with the car will only grow stronger and deeper

as the car finally connects to their digital lifestyles. ■



One doesn’t need to look far to see why there has been

a shift in public awareness and concern about energy

consumption. Increasing costs, blackouts, growing

bottlenecks, environmental concerns and geopolitical events in

several energy-producing nations have all contributed to a shift

in public policy towards energy efficiency. As a result, energy

efficiency and green-energy production (including solar, wind

and hydro) are growing components of national energy policy. In

fact, the Obama administration included $28.3 billion in the 2009

American Recovery and Reinvestment Act for alternative energy

production. However, green energy sources can be intermittent

and scattered (i.e., sunny days are needed for solar energy; wind

power turbines require wind). As such, green energy can be

difficult to integrate into the existing power stream.

But around the world, demand for energy is increasing (with population growth, larger homes, more air conditioners, larger appliances, etc.), as is interest in energy efficiency and more

environmentally-friendly products – particularly electric cars.

Given the recent announcements from GM and Nissan about the

new plug-in hybrid electric Chevrolet Volt and the all-electric

Nissan Leaf, it’s not a stretch to predict the electric car market is

poised to become mainstream in the near future. Why? Consider

costs for a moment. A 2007 study by the Electric Power Research

Institute found charging a plug-in hybrid electric vehicle (PHEV)

costs the equivalent of roughly $0.75 per gallon of gasoline

compared to the current average price of $2.65 per gallon. 1 In

addition, greenhouse gases could be greatly reduced with the

introduction of PHEV vehicles.
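The $0.75-per-gallon figure can be roughly reconstructed. The three inputs below are illustrative assumptions, not EPRI's actual study parameters.

```python
# Rough reconstruction of the gallon-equivalent cost of charging a PHEV.
electricity_usd_per_kwh = 0.075  # assumption: off-peak residential rate
ev_miles_per_kwh = 3.0           # assumption: PHEV electric-mode efficiency
gas_car_mpg = 30.0               # assumption: comparable gasoline car

# kWh needed to drive as far as one gallon of gasoline would take you:
kwh_per_gallon_equivalent = gas_car_mpg / ev_miles_per_kwh
cost = electricity_usd_per_kwh * kwh_per_gallon_equivalent
print(f"${cost:.2f} per gallon-equivalent")  # $0.75 per gallon-equivalent
```

Any mix of rate and efficiency assumptions with the same ratio lands on the same figure, which is why the per-gallon comparison is so sensitive to local electricity prices.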

At first glance, one might think meeting future electricity needs

requires building more electricity-generating plants which

are often carbon-intensive. However, a 2006 study by the U.S.

Department of Energy’s Pacific Northwest National Laboratory

estimated roughly 73 percent of the United States’ current light

duty vehicle fleet (which includes passenger cars, pickup trucks,

SUVs and vans) could be charged by our current electrical grid

without building new power plants. While this is welcome news

for auto manufacturers (i.e., new products), environmentalists

(i.e., reducing greenhouse gases), national security analysts (i.e.,

reducing oil dependency) and consumers alike, increasing the

number of electric vehicles (and thus transitioning away from a

carbon intensive economy) will require careful management of the

current electric system. 2

While energy demands are changing with technology

advancements and population growth, our current electric power

infrastructure (known as the “grid”) is dated and strained. In fact,

the power grid hasn’t changed much since it was introduced more

than 100 years ago. The Department of Energy notes our current

grid is the “largest interconnected machine on Earth … consists

of more than 9,200 electric generating units with more than

1,000,000 megawatts of generating capacity connected to more

than 300,000 miles of transmission lines.” 3

Despite the sheer size of the power grid, it is largely a one-way

system with electricity transmitted from the plant directly to

customers – with no automated and/or two-way communication

between producers and consumers. The problems associated

with this outdated system are perhaps best exemplified by power

outages. Most grids lack monitoring technologies to detect outages,

meaning power plants don’t know about an outage until it is reported by customers

– something almost unimaginable in this day of connectivity

between consumers and service providers. In addition, low

investment in new transmission lines and an often uncertain

regulatory environment have added to grid congestion in some

areas. 4 In fact, the Department of Energy estimates power outages

and power quality issues cost American businesses more than $100

billion per year. 5

Demand for electricity is projected to increase nearly 26 percent

from 2007 to 2030, or by an average of one percent per year.

The largest increase is expected in the commercial sector, followed

by the residential and industrial sectors. A growing

population and rising disposable incomes increase demand for

products, services and floor space – all of which increase electricity

consumption. 6
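
The “average of one percent per year” follows from compounding the projected 26 percent increase over the 23-year span:

```python
# The projection: electricity demand grows ~26 percent between 2007 and 2030.
total_growth = 1.26
years = 2030 - 2007  # 23 years

# Solve (1 + r)^years = total_growth for the annual rate r.
annual_rate = total_growth ** (1 / years) - 1
print(f"{annual_rate:.2%} per year")  # about 1.01% per year
```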


Given the strains placed on the current grid, one solution

is to incorporate digital technology (both computer and

communications technologies) and thus, make the grid “smarter.”

By adding more digital technology, the grid can operate much

more efficiently and reliably, as well as better incorporate

alternative and green energy sources. In addition, transforming

our current grid to a smart grid would make it more responsive,

interactive and transparent for consumers than the current grid. A

smart grid would also help coordinate significant power demands

(i.e., charging large numbers of electric cars), provide real-time

information on a household’s energy consumption and help

utilities better manage their networks. 7

It is ironic that in a country where technology dominates our daily

lives, the United States’ current grid infrastructure is largely stuck



in the 20th century, which limits the ability to address peak needs

and future demands. For example, the Department of Energy notes

that if the grid were just five percent more efficient, the energy

savings would equate to eliminating the fuel and greenhouse gas

emissions from 53 million cars. A more efficient grid will also be

more secure by decentralizing sections of the grid – allowing for

regional repairs and minimizing wide-scale outages in the event of

an attack. 8


Given the benefits of transitioning to a smart grid, many might

ask: what would a smart grid actually look like?

From the consumer’s perspective, the smart grid itself would

largely go unnoticed. The utility company would achieve greater

efficiencies (and reliability) in the transmission, distribution

and consumption of energy. In addition, adding digital relays

and sensors gives utility companies more visibility into the grid’s

current status and health, providing (among other benefits) an

early warning system to help prevent power surges from turning

into blackouts, as well as a means to manage production to meet

peak demand. A smart grid also increases options for distributed

generation – such as local wind turbines instead of a distant

electricity-generating plant – bringing generation closer to consumers.

The shorter the distance between where energy is produced and

where it is consumed, the more efficient, economical and possibly

greener the system is.

The next step to maximize the smart grid includes installing

smart meters in individual homes. Smart meters are similar

to current household electric meters, but track electricity

consumption in real-time and transmit usage information back

to the utility company. The continuous connection can inform

utilities of any power outages, as well as complete tasks such as

reading meters, turning residential power on or off and helping

to curb electricity theft. 9

Smart meters in the U.S. have grown to about six percent of

all electricity meters – up from 4.7 percent at the end of 2008

according to research firm Parks Associates. The same study

finds that to date, more than 8.3 million of the devices have been

installed. In addition, that number will approach 33 million within

two years. 10
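
The Parks Associates figures above imply a rough size for the overall U.S. meter base, which makes the 33 million projection easier to interpret:

```python
# Rough arithmetic implied by the Parks Associates figures cited above.
installed = 8.3e6     # smart meters installed to date
current_share = 0.06  # about six percent of all electricity meters

total_meters = installed / current_share  # implied U.S. meter base
projected = 33e6                          # installations expected within two years

print(f"{total_meters / 1e6:.0f} million meters nationwide")
print(f"{projected / total_meters:.0%} penetration within two years")  # ~24%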


Up until now, one might have wondered what smart grid and

smart meters have to do with the consumer electronics (CE)

industry. To take the benefits of smart meter technology to the

next level, smart meters will need to connect to smart thermostats, lighting

controls, appliances and other smart technology solutions. Look

for new CE products to be introduced to take advantage of the

smart grid and manage household energy – both retrofit and

custom solutions. Examples might include: chargers that recharge

phones, digital cameras and MP3 players during non-peak times

and battery-like devices that store larger amounts of electricity for

later household use. Consumers could access usage information

from a home device designed especially for that purpose or a

Web-based application. In addition, smart technology would allow

consumers to set their thermostats, automate home lighting and

even select what type of power they wish to purchase (i.e., solar,

wind, nuclear, coal, etc.) and pay accordingly. 11

While the opportunities for our industry are numerous, some

wrinkles need to be ironed out before we are likely to see a

proliferation of smart grid-enabled products. The most significant

impediment to smart technology is the need to restructure how

electricity is typically priced in this country. Currently electricity

is priced at the same rate regardless of demands on the grid.

Hence, consumers don’t necessarily benefit from altering their

behavior (i.e., running the dishwasher late at night when demand

for electricity is lower). If electricity were priced differently

based on time of day demands, consumers would then have an

added incentive to take advantage of smart technology. Studies

have found that when made aware of how much power they are

using, the average household reduces its electricity consumption

anywhere from seven to 15 percent. Ultimately, it should be

possible to have this done automatically, with certain appliances

waiting until energy prices are lower before running. 12 Consumers

would have the ability to override the recommendations from the

smart meter, with the understanding it will cost more to run the

device(s) during high-peak times.
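
In principle, the scheduling logic behind such a deferral is simple: given a day of time-of-day rates, a smart appliance shifts its run to the cheapest contiguous window. A minimal sketch, with hypothetical hourly rates and cycle length:

```python
# Minimal sketch of price-aware appliance scheduling: given a day of hourly
# rates, defer a load (e.g., a dishwasher cycle) to the cheapest window.
# The rates and run length below are hypothetical, for illustration only.
def cheapest_start_hour(hourly_rates, run_hours):
    """Return the start hour that minimizes total cost for the cycle."""
    window_costs = [sum(hourly_rates[h:h + run_hours])
                    for h in range(len(hourly_rates) - run_hours + 1)]
    return window_costs.index(min(window_costs))

# Assumed time-of-day rates in $/kWh: cheap overnight, a peak in the evening.
rates = [0.05] * 6 + [0.10] * 10 + [0.20] * 5 + [0.05] * 3  # 24 hourly rates
print(cheapest_start_hour(rates, run_hours=2))  # 0 (an overnight hour)
```

A consumer override, as described above, would simply bypass this search and run the appliance immediately at the prevailing rate.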

At first glance, one might wonder why a utility company would

want consumers to pay less for electricity, given that electricity is

their business. However, utilities can achieve significant savings

by not sending staff to read individual home meters, as well as by

identifying and rectifying local transmission problems before they

become wide-scale blackouts. Perhaps more importantly, reducing

peak demand and better managing electricity consumption

reduces the need to build additional power generating plants – a

significant investment.

Hence, a smart grid could change the business model of utility

companies by curtailing expensive investments in new power

generating plants. Reducing investment in building new electricity-generating

plants can free up funds for alternative energy

production – thereby increasing earnings and addressing societal

concerns about greenhouse gas emissions and global warming.

In addition, a smart grid can help manage charging of electric

vehicles to the point where millions of electric cars plugged in can

absorb excess energy as well as sell power back to the utility if

demand spikes.


While governments around the world are working to make

their grids smarter, standards need to be agreed upon before

we are likely to see significant growth in smart technologies.

Simply put, the absence of an agreed-upon standard creates

noticeable problems when networks and technologies need to

work together to achieve the benefits provided by a smart grid.

Some CE technologies are incorporating a specification known

as ZigBee while others use Z-Wave. Both are wireless networking

technologies for transmitting and receiving control commands

including lighting, HVAC, security and access systems and sensors,

albeit with different technical standards.

ZigBee has been developed by the ZigBee Alliance, a standards-setting

association composed of eight promoter companies

including Ember, Freescale, Honeywell, Motorola, Philips and

Texas Instruments. In contrast, Z-Wave has been developed by one

company, Zensys. However a Z-Wave Alliance has been formed

with a consortium of manufacturers who have agreed to build

wireless home control products based on the Z-Wave standard.

Principal members include Cooper Wiring Devices, Danfoss,

Fakro, Ingersoll-Rand, Leviton and Universal Electronics, among

others. 13 “There is a battle over standards right now,” notes Ian

Hendler, director of business development at Leviton. “This is not

unlike the battle between Blu-ray and HD-DVD. Until the industry

decides upon a specific standard, the industry will likely hold back

introducing products that connect with the smart grid.”


Perhaps the biggest question to be answered is whether consumers

will embrace the smart grid and what it can offer when fully

implemented. Recent CEA research finds that seven in ten (70

percent) Americans are concerned (with 36 percent reporting they

are “very concerned”) about the cost of their monthly electricity

bill. Concern is consistent across adults ages 25 and older (adults

ages 18-24 were less concerned, likely due to fewer adults in this

age group paying utility bills) as well as across all income brackets

and education levels. Given the strong level of concern across a

wide spectrum of the public, one can argue the public will readily

embrace technology aimed at reducing their electricity bills.

Implementing a smart grid and smart meters will not be cheap,

however. Smart meters cost roughly $125 each, but additional

costs for installation, software and network implementation for

the utility add to the up-front costs. The recent federal stimulus

included $11 billion for smart grid technology, with $4.5 billion

for smart-technology matching grants – figures that give some insight into the

start-up costs needed to make the grid smarter. 14 Some estimates

suggest that nationwide implementation of smart meters will cost

some $50 billion, but hundreds of billions more will need to be

invested into the current grid infrastructure in the next decade to

meet expected electricity demands. 15

The key to consumer acceptance of smart grid technologies is

consumer awareness and that must include, among other things,

a better understanding of energy consumption (i.e., variable

pricing). Hendler from Leviton notes “without variable pricing,

smart metering is a non-starter.” The good news for smart grid

technology is Americans are very interested in technologies

that will help lower their energy bill and manage their energy

consumption. CEA research finds Americans are interested

in smart meter technology so long as they realize the benefits

associated with it. Specifically, the survey found a majority of

Americans are interested in technology that would lower their

energy bill (69 percent) and manage their energy consumption

(57 percent).

However, consumers also have concerns about smart grid

technology placing restrictions on their use of household

electronics (42 percent), the reliability of such technology (39

percent) and compromising their privacy (39 percent). As such,

communications efforts focusing on the benefits of smart grid

technology as well as addressing reliability, restrictions and privacy

concerns will help sell it to the public. In addition, the public’s

concern about the level of control held by the utility company will have to

be addressed in messaging about smart grid technology.

Recent pilot programs have shown that time of day pricing is a

compelling value proposition for many consumers, even though

it is only available in a few markets in the country. A recent test

of smart thermostats in California found that 87 percent of

participants felt the real-time pricing was fair and seven in ten

of those customers chose to continue with the real time pricing

system after the pilot program ended. 16 It is safe to say consumers

are interested in smart grid technologies, but need information to

address their concerns while still promoting the benefits.




Smart grid technologies are a promising solution for America’s

energy needs. By making our nation’s grid smarter, consumers

will have more visibility into how they use electricity and,

thus, be empowered with options to save energy and money.

In addition, smart grid technologies can help reduce energy

consumption and increase the amount of electricity generated

from renewable sources.

On the policy front, we see growing action on both the state and

federal level for upgrading the grid. From federal and state grants

to pilot programs conducted by utility companies, it stands to

reason we will see a 21st century grid that meets our current and

future energy needs. The introduction of electric cars (among

other technological advances) will need a smart grid, or else

expensive (and oftentimes carbon intensive) electricity plants will

need to be built.

A smart grid can be the launching pad for a host of innovative

products much like the Internet was in the early years. The CE

industry is poised to help make smart grid technology realize its

potential. From lighting controls to home appliances to home

security systems, the industry can provide the solutions that allow

the home to ‘talk’ with the grid – allowing for greater efficiencies

and cost savings. ■

1 “Electric Cars – How Much Does It Cost per Charge?”

March 13, 2009. Scientific American.

< http://www.scientificamerican.com/article.cfm?id=electric-cars-cost-per-charge>

2 Kinter-Meyer, Michael, Schneider, Kevin and Robert Pratt.

Impacts Assessment of Plug-In Hybrid Vehicles on Electric Utilities and Regional U.S.

Power Grids. Pacific Northwest National Laboratory.

3 United States Department of Energy. The Smart Grid: An Introduction.

4 “Building the Smart Grid.” June 4, 2009. The Economist.

5 Department of Energy. The Smart Grid: An Introduction.

6 United States Department of Energy, Energy Information Agency.

Annual Energy Outlook 2009 with Projections to 2030.

7 The Economist, June 4, 2009.

8 United States Department of Energy. The Smart Grid: An Introduction.

9 The Economist. June 4, 2009.

10 “More than 8 million smart meters installed in U.S.” Smart Meters. July 21, 2009.

< http://www.smartmeters.com/the-news/583-more-than-8-million-smart-meters-installed-in-us.html>

11 The Economist. June 4, 2009.

12 The Economist. June 4, 2009.

13 Calem, Robert. “Battle of the Networking Stars, Part One: ZigBee vs. Z-Wave.”

Channel Web. October 3, 2005. < http://www.crn.com/digital-home/189501090;jses


14 Taylor, Phil. “Will Americans learn to love ‘smart grid’?”

February 27, 2009. New York Times. < http://www.nytimes.com/


15 The Economist. June 4, 2009.

16 Taylor, Phil.


The world is becoming a smaller place and new technologies

are increasing this sense of global connectedness.

Innovation continues to feed new technologies and tech

trends originating from any arena and location are soon adopted

around the world. Even five years ago, who would have thought cell

phones could be the key to new business opportunities opening

up in developing regions like Africa and India? Or that cell phones

could replace a non-existent, or destroyed, communications

infrastructure in places like Afghanistan and Iraq?

New technologies can originate from diverse sources such

as universities, corporations (both home and abroad), the

government (military, civilian and space programs) and from

individuals who have a vision, whether it’s a device, an application,

or a new use for an existing technology. It’s impossible to predict

what will be the next big game changer in the industry, but in

this “future” section we look at some of the technologies that are at the

concept, research and development, or prototype stage.


In the fast-paced CE world, it can be easy to forget that sometimes

innovations take time to bear fruit. Take the digital camera, for

example. The world’s first prototype of a digital camera was

created in 1975. Numerous technological advances over the

following decades brought the digital camera to where we are now.

What past concepts and discoveries are nearing perfection?


In 1963, Polaroid scientist Pieter J. Van Heerden imagined storing

data in three dimensions. Since this concept was brought forth,

researchers have been working on developing holographic data

storage methods based on this model.

Recent developments in holographic storage might mean that

formats based on this idea are closer than we think. GE Global

Research in early 2009 announced that it had developed a

holographic storage material capable of storing 500GB of data on

a DVD-sized optical disc – ten times the amount that can be stored

on a dual-layer Blu-ray disc. Discs of this size could, in the future,

be used to store 3D video.

GE explains that the technology is still a few years away from being

commercialized, but in this case, time may be on their side. As

consumers continue their shift to a new high-definition format

in the wake of the battle between HD-DVD and Blu-ray, a new

optical disc format could run the risk of inducing format-overload.


The idea of molecules composed entirely of carbon was first

hypothesized by Eiji Osawa of Toyohashi University of Technology

in 1970. The C60 molecule was first discovered by researchers

in 1985, and named ‘buckminsterfullerene’ in homage to Richard

Buckminster Fuller, the architect responsible for developing

geodesic domes, which the molecules resembled. The molecule’s

name was later shortened to simply ‘fullerene’. Spherical fullerene

molecules are known as ‘buckyballs’, and may have remarkable

technological applications in electronics and nanotechnology.

In 2009, a group of researchers from the University of Cambridge

in the U.K. discovered how to join these buckyballs to create highly

stable ‘buckywires’. Buckywires are believed to be a cheaper, more

efficient alternative to carbon nanotubes. The method of

polymerization – chaining the buckyballs into buckywires using

oil to join the molecules – means it may be possible to grow these

buckywires on an industrial scale by dissolving the molecules in a

vat of oil.

Buckywires have potential uses in technology fields such as

photovoltaics. Researchers believe that due to buckywires’ large

surface area and the manner by which they conduct electrons,

they could be incredibly efficient in harvesting power from light

sources. Other applications include wiring for molecular circuit

boards, as well as offering a cheaper, metal-free alternative to the

carbon nanotubes sometimes used for delivering drugs into the

human body.


Geobacter, an anaerobic organism, was discovered in the sediment

of the Potomac River in 1987 by Derek Lovley. Through a process

known as bioremediation, the Geobacter microbe consumes oil-based

pollutants and radioactive material, creating carbon dioxide

as a waste byproduct. However, Geobacter may have further uses

that benefit the environment, as shown by a new strain evolved by

Lovley and research colleagues at the University of Massachusetts.


During the process of bioremediation, electricity is produced. In

the case of the new strain developed earlier this year by Lovley’s

team, the amount of power produced is increased eight-fold.

According to Lovley, this output level allows for the design of

microbial fuel cells that convert wastewater and renewable biomass

into electricity through the microbe’s bioremediation process.

Lovley’s work is supported by the Office of Naval Research and



the U.S. Department of Energy, but is only one of a multitude of

research projects aimed at energy generation and efficiency.


Energy is paramount in every corner of the world. In developed

countries, the primary concern is to develop alternatives to fossil

fuels and to make better use of resources through more efficient

power usage. In developing countries where people may not

have access to a power grid, they have to be creative in finding

alternative ways to power existing and new technologies.

A review of some of the technologies under development in labs

around the world may provide insight to how we can power

portable CE devices in the future.


In Germany, a group of scientists has developed an ultra-thin,

printable battery that weighs less than one gram. These batteries

are printed using a silk screen method – similar to the way slogans

are transferred onto t-shirts. A paste is pushed through a screen

onto a template substrate, resulting in a “printed” battery that is

only slightly thicker than a human hair.

Not only is the battery thin and lightweight, it also contains no

mercury, so it’s environmentally friendly. Although at the moment

the batteries can only deliver a 1.5V charge that doesn’t last

long – enough to, say, power a musical greeting card – the concept

and production method hold potential for use in small portable CE devices.


“ NEC in Japan is working on developing

a flexible, light and rapidly rechargeable

(30 seconds) battery that could power

portable CE products.”



This next technology is available now. In fact the concept version

was developed and demonstrated at England’s famous Glastonbury

rock festival this year. So if you love to go camping miles away

from civilization (and power outlets) and you love using your CE

products, the two no longer have to be mutually exclusive thanks

to the solar tent. Literally a tent with panels made from a fabric

that has solar threads woven right into it, the pitched tent (angled

towards the sun) gathers solar energy during the daytime and

stores it for whenever you are ready to use it to, say, power your

laptop or charge your mobile phone. The solar tent, which was

developed by Orange, a UK-based wireless carrier, even offers “Glo-cation”

that allows you to text your tent, causing it to glow so that

you can easily find it. At night, campers can slip their phone in a

pouch inside the tent to charge via magnetic induction technology

with no power cables needed.



It’s still a concept, but NEC in Japan is working on developing

a flexible, light and rapidly rechargeable (30 seconds) battery

that could power portable CE products. The organic free radical

material is a polymer or type of plastic. The polymer is permeated

with electrolytes that create an electrical charge. Because of the

extreme flexibility of the polymer gel, organic free radical batteries

could literally be stuffed into any available space inside a device.


Mobile giant Nokia is always looking to stay ahead of its

competitors. At the Nokia Research Centre in Cambridge, England,

researchers are working on a cell phone that would seemingly

pull its power source from thin air. The technology involves

drawing power from ambient radio waves that surround us to

charge a wireless device.

At the moment the prototypes developed by Nokia collect and

convert three to five milliwatts but the research team is working

on a model that could garner 50 milliwatts. It works like a

higher-powered version of a radio frequency identification (RFID) tag

by turning electromagnetic waves into electrical current. The

technology requires that the device being charged be turned off

during charging, as the dual passive circuit technology requires

it to harvest more energy than it expends. To increase the amount

of power being captured, the researchers are using a wideband

receiver that can pull signals from 500 megahertz to 10 gigahertz.

Nokia expects to use this method of powering phones as an

ancillary one, combined with perhaps solar cells on the outside of

the phone. They expect to have this product in the marketplace

within three to four years.


Another strong contender in the “never needs to be plugged in to

charge” arena is the kinetic phone. From DesignerID in Lisbon,

Portugal, the Atlas kinetic concept phone has a unique appearance

that challenges the aesthetic sense of style. It incorporates weights,

rotors and springs that generate power when the phone is moved,

which in turn drives a generator that recharges the phone. If this

technology sounds familiar, it’s because it’s the same principle

used in kinetic watches, though of course a cell phone needs more

power. At the moment, the design is still in concept form with no

known plans for going to market. It will likely appeal to the same

people who own kinetic watches.



Further out on the mobile phone horizon is another product

from Nokia research that utilizes nanowire grass. The entire outer

surface of a phone would be covered in this nanowire grass which

in turn is covered with a biomolecule that works in a similar way

to plant photosynthesis – literally turning sunlight into electrical

power. The ultimate “green” phone, perhaps?

Last year, CEA’s Five Technology Trends to Watch covered “control”

– how humans interact and communicate with their CE products.

There are a couple of new developments on the international scene

that will likely affect how we control our devices in the future.


An increasing number of CE products now offer the luxury and

convenience of touch screen technology. Japan’s NHK

Broadcasting Corp., along with engineers from Tokyo University,

is trying to make the touch screen experience more satisfying

and accurate for the user. This Tokyo team designed a prototype

touch screen with millions of minute pins on the surface.

The arrangement of the pins will be determined by complex

programming built into the device. The pins can also be pressed,

like switches, and could be used like keyboards or buttons. One

possible use is to display Braille characters on a screen – making

devices easier and more accessible for blind people. NHK predicts

it will be about ten years before this technology is in general use.


At the Fraunhofer Institute for Photonic Microsystems IPMS in

Dresden, Germany, scientists are working on the ultimate hands-free

control method by turning eyeglasses into an instrument of

control. Unlike the existing “heads-up” or head-mounted displays

(HMDs) used in fighter jets, Formula One race cars and virtual

reality games, these glasses allow the user not just to receive

information but also to direct or control actions. Fitted with

CMOS (Complementary Metal-Oxide-Semiconductor) chips in

the hinge at the temple, the glasses project an image onto

the retina of the wearer. The image, utilizing OLEDs (Organic

Light-Emitting Diodes) bright enough to be seen over ambient

light and different backgrounds, appears to the wearer. The CMOS

chip provides an eye tracker in the display. CMOS chips are perfect

for this application being small, light and relatively inexpensive.

The user can then exercise control by moving their eyes or fixing

on certain points in an image. This method of control, once

perfected, could be used in conjunction with a number of CE products.


“ Researchers at the Massachusetts

Institute of Technology (MIT) have

developed a fabric, composed of light-sensitive

fibers that function as a camera.”


Researchers at the Massachusetts Institute of Technology (MIT)

have developed a fabric, composed of light-sensitive fibers that

function as a camera. Yoel Fink of MIT explained the process,

by which the fibers reproduced an image of a smiley face, in

the journal Nano Letters. A mesh of such optical fibers could

distribute the task of recording an image across the fabric in order

to minimize the impact of a damaged area. This could lead to

military applications of the technology, allowing a soldier to see

reproduced images from different directions, according to Fink.

Possible CE developments could eventually stem from this.



Netbooks are the hot new CE product category. But it’s too early to

dispense with the more powerful but heavier and energy-hungry

laptop computer, especially as they continue to evolve into lighter,

more stylish and more easily powered versions.


Felix Schmidberger, an independent designer based in Stuttgart,

Germany, created the Compenion, which is like a larger version

of a slider-style mobile phone. The computer is composed of

two OLED panels that slide up next to each other with one panel

operating as a keyboard or scribble pad and the other as the

display. The resulting work surface is only three-quarters of an

inch thick and lightweight but yields a large workable surface. The

Compenion is charged wirelessly by being placed on an inductive

power pad, so no tiresome plug-ins are needed.


For some time, solar-powered laptops have been a dream, but the

challenge is to capture and store enough rays to meet the power

needs of an operating laptop. Serbian designer Nikola Knezevic

believes he can overcome these limitations by designing the “Solar

Laptop Concept” that comes with an extra solar cell-covered

lid to gather more power. The hinged lid adds only a few tenths

of an inch to the thickness of the laptop which means that the

device opens up like a bi-folded piece of paper that can be angled

towards the sun. Although the design is elegant, it is still not yet

able to completely power a laptop. If the chip industry continues

to develop more efficient processors requiring less cooling and,

therefore, less power, it could be a contender for the first all-solar laptop.



Other physically miniscule, yet immensely promising

breakthroughs have been made in areas such as microchip design

and quantum computing. A number of these advances could

revolutionize how we build, operate and interact with electronics.

Quantum computing is a concept that has long been studied for its

technological promise. The idea is that a quantum computer,

differentiated from traditional systems by its use of quantum

mechanics to perform operations on data, could hypothetically

solve problems at a much greater speed than classical computers.

Two recent breakthroughs in the quantum computing field bring

such a system closer to reality.

First, a team led by researchers at Yale University has created

the world’s first electronic quantum processor. This solid-state

quantum processor is composed of two ‘qubits’ (quantum bits),

and was used in successful computations of elementary algorithms.

According to Professor Robert Schoelkopf, this is the first time that

such tasks have been completed on an all-electronic device.

Another important step made this year on the road to the

quantum computer was the creation of an optical transistor from

a single molecule. Quantum optics deals with the application of

quantum mechanics to light and matter, and could allow for the

production of integrated circuits that operate on

photons rather than electrons. Such integrated circuits would

enable faster data transfer and reduced heat dissipation. The

optical transistor created by researchers at ETH Zurich is roughly

two nanometers in size and produces an almost negligible amount

of heat. The small size of the transistor would allow for more data

to be placed on a single chip.



Another approach to minimizing the very building blocks of our

computing systems is being developed by IBM. The company

has begun looking to human DNA to provide the framework

for microchips. According to a paper published in Nature

Nanotechnology, IBM is using artificial DNA nanostructures as a

framework to build the tiny microchips used in electronic devices.

These structures provide a reproducible, repetitive pattern for

semiconductor processes, but such processes are more than 10

years away, according to IBM Research Manager Spike Narayan. ■


The International CES has long been the springboard for

innovation in technologies. Each year, companies large

and small use the show to reveal to the world their most

cutting-edge products, content and services. The Consumer

Electronics Association (CEA)®, the organization behind

the International CES, encourages innovation as the key to

the continued growth and success of our industry and has

launched the Innovation Movement. Visit: http://innovationmovement.com/

to learn more about the importance of

protecting and promoting innovation in the U.S. and around

the world.












CEA’s Innovation Checklist was delivered to Congress earlier

this year. CEA asked Congress to evaluate each bill by the

following six criteria:



Questions? Please contact us:

Consumer Electronics Association

1919 S. Eads Street

Arlington, VA 22202





866-858-1555 toll free

703-907-7600 main

703-907-7601 fax

