Friday, January 24, 2014

Ford is working with MIT, Stanford to build “common sense” into self-driving cars

Ford Motor Company is teaming up with the Massachusetts Institute of Technology and Stanford University to research the future brains of its autonomous cars. Projects like Ford’s research vehicles are putting the sensors and computing power into cars that would allow them to read and analyze their surroundings, but these two universities are developing the technology that will allow them to make driving decisions from that data.
“Our goal is to provide the vehicle with common sense,” Ford Research global manager for driver assistance and active safety Greg Stevens said in a statement. “Drivers are good at using the cues around them to predict what will happen next, and they know that what you can’t see is often as important as what you can see. Our goal in working with MIT and Stanford is to bring a similar type of intuition to the vehicle.”
In December, Ford unveiled its latest research vehicle, a Ford Fusion Hybrid equipped with Lidar (laser-radar) rigs, cameras and other sensor arrays, all intended to generate a real-time representation of the world around the car. Such a car can “see” in all directions, allowing it not only to take in far more stimuli than even the most alert driver, but also to react to that information far more quickly. That’s where Stanford and MIT come in.
The Ford Fusion research vehicle from Lidar’s point of view
MIT is developing algorithms that will allow an autonomous driving system to predict the future locations of cars, pedestrians and other obstacles. It’s not good enough for a car to merely sense the location of nearby vehicles when it switches lanes or swerves to avoid an accident. It has to know where those vehicles will be a split-second later. Otherwise the car will avoid one accident only to cause another.
That means not only measuring other vehicles’ current speed and trajectory but also anticipating how their drivers – or their autonomous vehicle systems – will react to the situation. Basically, MIT is trying to create a vehicle brain smart enough to assess risks and outcomes and navigate its course accordingly.
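To make that concrete, here is a minimal sketch of the kind of short-horizon prediction such a system needs, assuming a simple constant-velocity model. The class and function names are illustrative, not Ford's or MIT's actual software, which would layer driver-intent models and uncertainty estimates on top of something like this.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float    # position relative to our car (meters, east)
    y: float    # position relative to our car (meters, north)
    vx: float   # measured velocity (m/s)
    vy: float   # measured velocity (m/s)

def predict_position(obj: TrackedObject, dt: float) -> tuple:
    """Constant-velocity guess of where a tracked car or pedestrian
    will be dt seconds from now."""
    return (obj.x + obj.vx * dt, obj.y + obj.vy * dt)

# A car 20 m ahead, closing at 5 m/s, will be roughly 17.5 m ahead half a
# second from now -- the "split-second later" the article refers to.
other_car = TrackedObject(x=0.0, y=20.0, vx=0.0, vy=-5.0)
print(predict_position(other_car, dt=0.5))   # (0.0, 17.5)
```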
Stanford is doing something a bit different. It’s trying to extend the sensory field of the car by helping it see around obstacles so it can react to dangers the driver can’t immediately see. Stanford and Ford didn’t offer any specifics on just how they would accomplish that feat, but my bet is it has to do with Ford’s and the automotive industry’s work on inter-vehicle networking.
Cohda Wireless autonomous car
Future autonomous cars won’t just be able to sense their surroundings; they’ll be able to communicate with other vehicles using a secure form of Wi-Fi. For instance, Australian startup Cohda Wireless is developing vehicle-to-vehicle networking technology that would allow two cars to let each other know they’re approaching one another at a blind intersection.
Ford and other major automakers are working with the University of Michigan and the National Highway Traffic Safety Administration to build vehicle-to-infrastructure grids that would allow cars to tap into highway sensors, giving them a kind of omniscient view of the overall road. With such technology other cars could reveal their intentions before they even take action, making other connected vehicles much more responsive. They could also share their sensor data, so even if only one of the cars far ahead of you is connected to the vehicle grid, that lone vehicle could still tell you what the other cars around it are doing.
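As a rough illustration of the kind of data a connected car might broadcast over that secure Wi-Fi link, here is a hedged sketch. The message fields below are assumptions made for illustration, not the standardized (and cryptographically signed) basic safety messages that real V2V systems exchange.

```python
import json
import time

def make_safety_message(vehicle_id: str, lat: float, lon: float,
                        speed_mps: float, heading_deg: float,
                        braking: bool) -> str:
    """Build a hypothetical V2V status broadcast. Real deployments send
    standardized, signed messages several times per second."""
    return json.dumps({
        "id": vehicle_id,
        "timestamp": time.time(),
        "lat": lat,
        "lon": lon,
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
        "braking": braking,  # lets a following car react before its own sensors "see" the hazard
    })

# A car approaching a blind intersection might broadcast something like:
print(make_safety_message("veh-042", 51.5007, -0.1246, 13.4, 270.0, braking=True))
```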
While every major automaker is working on autonomous driving technology, Ford has been particularly aggressive. In a recent interview, executive chairman Bill Ford told me how the automaker is trying to use connected vehicle technology to propel the company into a new golden age of automotive innovation.

source: http://gigaom.com/2014/01/22/ford-is-working-with-mit-stanford-to-build-common-sense-into-self-driving-cars/

The internet of things needs a new security model. Which one will win?

The Target data breach occurring over compromised point-of-sale terminals. The recent news that a botnet army which sent 750,000 spam emails included a refrigerator. The discovery of a Linux worm that could infect security cameras. In the last two months all of these headlines have served to stoke fear over the vulnerability of connected devices and current security practices. Much like the cloud has allowed denial of service attacks to grow in might, the array of relatively dumb and unsecured connected devices threatens to participate in botnets, leak data or act as a weak point for hackers to target.
And when it comes to securing the internet of things, it’s likely that the current methodologies will have to change, given how a connected and interconnected world works. Instead of simply keeping bad guys out, the zeitgeist is moving toward assuming everything is compromised and working out how to keep attacks from succeeding, or how to establish and then re-establish a trusted environment.
This is hard. But first, let’s focus on some of the things that make the internet of things such a challenge to secure in the first place.

Why isn’t the internet of things secure yet?

  • Promiscuity across networks. Because devices are expected to talk not only to the internet but also to each other, every node on the network is a potential weak point — and depending on whose numbers you believe, those devices will number 30 billion to 50 billion in the next five or six years. You aren’t only securing the internet of things against dangers that might attack it over the public internet; because most connected-device networks are mesh networks, you must also keep a bad node from attacking or co-opting other devices on the same mesh.
  • Connected devices are stupid. As this post from Gartner points out, not all connected devices are smartphones; many don’t even pack the computational power of a 32-bit microcontroller. That means tasks like encrypting data are going to be impossible on some of them, and any type of security must be lightweight (see the sketch after this list).
  • The owners of connected devices are stupid. Fine, they may not be stupid, but they certainly aren’t using password generators or even making sure their hardware is up to date or changing the admin password on the devices. Many consumer connected devices have to be dead simple and have security to match. And of course, if the trade-off is between security and convenience (two-factor authentication? No way!) security will lose.
  • The great unknown. We haven’t figured out how we’re going to get devices to talk to each other and to automate our workplaces and lives yet. It’s really hard to secure an amorphous concept, which is pretty much what most implementations of the internet of things look like today. Sure, there are closed systems that may feel more secure, but if we accept that the goal here is to build services on top of hardware and software that share their data, then those closed systems are going to look like relics of a quaint and forgotten past. So far, we don’t know what will evolve, what protocols it will use and which ways of building out the system will win.
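Here is a minimal sketch of what “lightweight” security on a constrained device could look like, assuming the device can at least compute a keyed hash: rather than encrypting every reading, the sensor appends a short, truncated HMAC so the hub can reject tampered or spoofed data. Key handling and replay protection are deliberately omitted, and this is an illustration of the general idea rather than any particular product’s design.

```python
import hashlib
import hmac

DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"  # illustrative only

def tag_reading(payload: bytes, tag_bytes: int = 8) -> bytes:
    """Append a truncated HMAC-SHA256 tag so the hub can check integrity."""
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()[:tag_bytes]
    return payload + tag

def verify_reading(message: bytes, tag_bytes: int = 8) -> bool:
    """Recompute the tag and compare in constant time."""
    payload, tag = message[:-tag_bytes], message[-tag_bytes:]
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()[:tag_bytes]
    return hmac.compare_digest(tag, expected)

msg = tag_reading(b"temp=4.2;device=fridge-07")
print(verify_reading(msg))                       # True: untouched reading accepted
tampered = bytes([msg[0] ^ 0xFF]) + msg[1:]      # flip bits in the first payload byte
print(verify_reading(tampered))                  # False: tampered message rejected
```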

Which framework wins out?

There are many, many more issues, some of which are subsets of these and others that are just crazy, like the idea of denial-of-power attacks, in which an attacker sucks an essential sensor’s battery dry. So how will we secure this?
One idea gaining ground is that we will accept that the system is insecure and then develop software and procedures to determine what we can trust on the fly. I have no idea what it might look like, although my friend Jason Hoffman at Ericsson likened it to a Turing test for security that devices might perform. It has the same underlying assumption that influences Netflix’s Chaos Monkey concept, which is to assume systems will break and prepare for it in all manner of ways.
In a related concept, perhaps instead of stopping data breaches we’ll stop those who profit from them from actually making money. This week Shape Security, a startup founded by some ex-Googlers, launched a product that tries to prevent people from mass-charging goods at online retailers. Shape’s magic is that it can generate a dynamic and ever-changing version of the HTML, CSS and JavaScript on a web page while still keeping the front end looking the same.
The benefit of this is that hackers who have stolen credit card information can’t write scripts that automatically fill out the order forms on websites like Amazon or Wal-Mart. When you’re trying to monetize 30 million stolen credit cards, you aren’t entering that data by hand.
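To show the general idea, here is a toy sketch of “polymorphic” markup — not Shape Security’s actual implementation. Imagine the server renaming every checkout field on each page load and keeping the mapping on its side, so a bot script hard-coded against field names like card_number no longer lines up:

```python
import secrets

FIELDS = ["card_number", "expiry", "cvv"]

def render_checkout_form():
    """Serve the form with per-request field aliases; keep the mapping server-side."""
    alias_map = {field: "f_" + secrets.token_hex(6) for field in FIELDS}
    inputs = "\n".join(f'  <input name="{alias}">' for alias in alias_map.values())
    html = f"<form action='/checkout' method='post'>\n{inputs}\n</form>"
    return html, alias_map

def decode_submission(post_data: dict, alias_map: dict) -> dict:
    """Translate the aliased POST body back to the real field names."""
    reverse = {alias: field for field, alias in alias_map.items()}
    return {reverse[k]: v for k, v in post_data.items() if k in reverse}

html, aliases = render_checkout_form()
print(html)  # field names differ on every load, so a replayed bot script misses them
```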
And finally there’s the concept of designing with security in mind, which is of course a lot harder than it might seem. But this is the approach most security researchers are advocating, with some even encouraging government agencies to impose fines on consumer electronics companies if their products are hacked. This might involve using chips with trusted zones to store sensitive data, or rewriting the firmware for these devices with far more secure code. Many security cameras and routers are hacked via their firmware.
It’s not an area that gets much investment because, until now, it was something the user didn’t see. It’s like not dressing up for a conference call taken from the home office — it doesn’t matter until the call suddenly becomes a Google Hangout or video conference. Once these embedded devices started connecting to the internet, the call switched from voice to video and everyone could see their flaws.
Other elements of designing for security might be limiting access, or securing how the device talks back to the cloud and making sure the servers it talks to are secured. It might be the locked-down version of security we’re familiar with today, or it might mean implementing that type of Turing test to ensure it’s secure before transmitting information.
Basically, security models change over time in the IT realms and, as we enter a new realm with more nodes, differing interconnections, normal users and dumb devices, we’re going to have to adapt. Let’s talk about how.
 source: http://gigaom.com/2014/01/22/the-internet-of-things-needs-a-new-security-model-which-one-will-win/

Anatel is preparing a regulation for high-speed wireless backhaul

Anatel will put out for public consultation a standard for the use of the 70 and 80 GHz bands for new telecommunications applications. In practice, the move comes a little late, at least for a piece of spectrum that is already being used elsewhere to deploy high-speed wireless links.

The use of the 71-76 GHz and 81-86 GHz bands was approved by the United States telecommunications regulator, the FCC, back in 2003, at the request of equipment manufacturers. Here in Brazil, the regulator was likewise prompted by the market.


"These studies are the private sector, where the Telefonica / Vivo and also by manufacturers, who proposed a regulation for use of these bands. The proposal is for any telecommunications services in point-to-point applications, on a primary basis without exclusivity, "summarized The draftsman, Marcelo Bechara.

As the board member explains, these bands are used to connect fixed points, serving as wireless backhaul, "a use that can be considered the most important at the present moment," Bechara said, noting the ongoing deployment of 4G services.


"Regard to the recently auctioned the 4G radio and the need of transmission rates at the output of the high ERBs are, it implies that the backhaul should be implemented preferably with fiber. But in some cases it is not possible or feasible, and these bands 70/80 GHz are presented as an alternative, "he explained.

In fact, it is not surprising that the request came from Telefonica. These bands are used exactly for this kind of high-speed wireless link in big cities, where the cost of burying optical fiber can derail some investments.


These very high frequencies present some technical difficulties - as with microwave links - but have been adopted in the U.S. since at least 2006. In essence, these are devices that can be placed on top of buildings and need line of sight for transmission.

In general, implementation difficulties limit these higher bands to distances of about 3 km, but the approach appears to work, as there are several equipment suppliers. At distances closer to 1.5 km, these systems can achieve transfer rates of 1 Gbps. Anatel's public consultation should open next week and receive contributions for 45 days. The agency also plans to hold a public hearing on the proposal in Brasilia.

Virgin Mobile signs agreement with Vivo and requests MVNO license in Brazil

In a press release distributed on Thursday, 23/01, VMLA - Virgin Mobile Latin America - announced the launch of its operations in Brazil and Mexico. The company already operates in Chile and Colombia.

In Brazil, the operation will follow the MVNO (mobile virtual network operator) model, made possible by a network-sharing agreement with Vivo/Telefonica. Virgin Mobile also announced today that it has filed a request with Anatel to operate as an MVNO in the country. There is no official forecast for the start of operations.


It is worth remembering that the MVNO business in Brazil has run up against exactly this difficulty: companies interested in the model struggle to close network usage agreements with the major operators. This is the first MVNO on Vivo's network and the third network-sharing contract with an operator in the country - the other two were signed with Claro and Nextel for 2G and 3G services.

As MVNOs, Virgin Mobile Brazil and Virgin Mobile Mexico will use the local Vivo/Telefonica networks in each country, while controlling the entire customer relationship. Virgin Mobile pioneered the MVNO model, and the brand currently has more than 18 million customers in 10 countries. In Brazil, the company will face Porto Seguro and Datora Telecom, which already have established MVNO operations.

Apps leave the 'confinement' of tablets and smartphones

Mobile applications will reign among computing tools in 2017. Gartner expects more than 268 billion applications to be downloaded in five years, generating revenue of more than $77 billion. Also according to the consultancy, mobile users will provide personalized data streams to more than 100 apps and services every day.

While Facebook and Twitter have had a great influence on users' willingness to share personal information with others, companies in emerging areas such as health monitoring, "smart" home technology and cars will drive a new world of apps that can take our data and analyze it in depth, the consultancy's study argues.


"In the next three or four years, the apps will not simply confined to smartphones and tablets, but will impact a larger group of devices, from residential applications to cars and wearable gadgets (wearables)," says research director at Gartner, Brian Blau. "By 2017, Gartner predicts that wearable gadgets will drive 50% of all interactions of apps."

Since appliances such as refrigerators and thermostats, or wearable bracelets, do not have a screen or central interface of their own, apps and other programs are needed as intermediaries for exchanging data between users and the company or the product, Gartner points out. "As users continue to adopt and interact with applications, it is their data - what they say, what they do, where they go - that is transforming the paradigm of app interaction," adds Blau.

Anatel attempts to accelerate review of 'relevant markets'

Front of "several" requests for the Anatel to reassess the definitions of the relevant markets, the Board of the agency, at a meeting held on Thursday, 23/​​01, ruled that the Superintendence of Competition accelerate studies on this subject to be possible to have the review completed by next November - deadline for any changes in PGMC (General Plan competitive).

"We will include the Competition Superintendence immediately begin examining reassessment of offers in the relevant markets, especially in regard to undertakings with significant market power. We'll leave the finer PGMC until November, which is the limit of the revision of the assumptions placed on it, "explained the president of Anatel, Joao Rezende.


In the PGMC, the agency adopted asymmetric regulation rules, which in practice means placing heavier obligations on operators that have significant market power, i.e. that are large enough to influence the functioning of the market in certain localities.

According to board member Igor de Freitas, the rapporteur of the specific case that prompted the board's discussion, the companies have good chances of success. "The technical department says that in some cases there is evidence of improper characterization of certain relevant markets. It is prudent to prioritize the proceedings that requested reassessment," he said.


The rule gave a two-year deadline for these "relevant markets" to be reassessed, so Rezende treated the requests as "a moment of reaffirmation of the PGMC". "The Competition Superintendence has a number of requests from Oi, TIM and Telefonica for review of the relevant markets," the president added.

Since a request from CTBC had already reached the board, that case was used to provide guidance to the technical area. The issue is that the operators do not agree with the designations made by Anatel and have therefore asked for several "relevant market" cases to be reviewed - the geographic areas where a given operator is deemed to hold Significant Market Power.

Thursday, January 23, 2014

VMware makes an acquisition in enterprise mobility

VMware reported on Wednesday, 22/01, that it will buy mobile security company AirWatch for about $1.18 billion in cash and approximately $365 million in installment payments. According to the statement, AirWatch will become a unit of VMware and its employees will continue to report to AirWatch founder and CEO John Marshall.
A specialist in corporate mobility, AirWatch has been attentive to the Latin American market despite having no office in Brazil; the company conducts business in the region from Miami. "In the last two years, we have grown rapidly in Latin America. We have over 25 employees serving customers and partners in the region exclusively in their local languages," says Cesar Berenguer, director of new business for AirWatch Latin America.
" There is a great demand for mobility throughout the region . Companies are choosing AirWatch for the development of their mobile initiatives due to our ability to accelerate the growth of business and the services we offer in different languages ​​. We opened a new office in Miami , the current hub of Latin American business , and we can offer our clients a unique service and support in Spanish and Portuguese . "

Phablets: OTT content boosts sales

Phablets - devices with mobile phone capabilities but with screens from 5.6 inches up to 7 inches, just below tablet size - will win a growing share of the market, suggests Juniper Research. According to the consultancy, 20 million phablets were produced worldwide in 2013. In 2018 there will be 120 million, an increase of 500%.

The reason for the growth is demand in East Asia, in countries such as South Korea, where there is a desire for large screens to run games, and China, where consumers value better screen quality for streaming over-the-top (OTT) content.
With all this, Juniper says the phablet market can become a growth area for established smartphone suppliers targeting consumers passionate about technology, citing recent launches from companies such as Nokia and Alcatel.

Since, at least so far, Apple has no entry in this niche, the phablet market will be dominated by two operating systems: Android and Windows Phone. Microsoft's system will be promoted by Nokia's Lumia line, and Juniper believes this player will perform better particularly in developing countries. On the Google side, the leader will be Samsung's Galaxy Note series.

Alcatel-Lucent and BT test transmission at 1.4 Tbps

Alcatel-Lucent and the British operator BT announced that, in a test performed in London, they reached speeds of 1.4 Tbps in optical fiber transmission with a spectral efficiency of 5.7 bits per second per Hertz. According to the companies, this would be 'the fastest connection ever reached in a real commercial hardware environment'. The speed is equivalent to transmitting 44 high-definition movies in one second.
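As a rough sanity check on those figures (the ~4 GB per high-definition movie used below is my assumption, not a number from the announcement), the quoted capacity and spectral efficiency imply roughly 246 GHz of occupied spectrum, and 1.4 Tbps works out to about 44 movies of that size per second:

```latex
\frac{1.4\times 10^{12}\ \text{bit/s}}{5.7\ \text{bit/s/Hz}} \approx 2.46\times 10^{11}\ \text{Hz} \approx 246\ \text{GHz}
\qquad
1.4\ \text{Tbit/s} = 175\ \text{GB/s}, \quad \frac{175\ \text{GB/s}}{4\ \text{GB per movie}} \approx 44\ \text{movies/s}
```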

The field test was done on a 410 km fiber link between the BT Tower in the English capital and the company's research campus in Suffolk. By increasing the density of channels, the experiment is said to have improved transmission efficiency by 42.5% compared with existing networks.

According to a statement from Alcatel-Lucent, "the increase in capacity occurred with existing optical fiber, potentially reducing the cost of deploying more fiber with the increasing demand for bandwidth."

Lenovo's purchase of IBM server unit is the largest acquisition deal in China's IT sector

BEIJING, Jan 23 (Reuters) - Lenovo Group, the largest PC maker in the world, has agreed to buy IBM's server unit in a long-anticipated deal valued at $2.3 billion, the largest acquisition ever made by a Chinese technology company.

Lenovo will pay $2.07 billion in cash and the remainder in shares of the Beijing-based computer maker, the company said in a statement to the Hong Kong stock exchange on Thursday.

The agreement surpasses Baidu's $1.84 billion acquisition last year of 91 Wireless, formerly owned by NetDragon Websoft, according to Thomson Reuters data, and highlights the growth of the country's tech companies as they pursue international expansion.


The acquisition will enable Lenovo to diversify its revenues beyond the struggling PC segment and to refashion itself as a growing force in mobile devices, servers and data storage.

The sale enables IBM to shed its low-margin x86 business, which sells servers that are less powerful and slower than the company's higher-margin offerings, and to focus on its shift to more profitable software and services.

Lenovo's acquisition of IBM's ThinkPad PC business in 2005 for $1.75 billion became the springboard for the company to reach the top of the world ranking of PC manufacturers.

The market is betting that Lenovo will have similar success with its new acquisition, which is partly reflected in the 9.44 percent gain in its stock this year.