Google is the latest company to brush off most of the Wikileaks vulnerabilities

Wikileaks dumped thousands of alleged CIA documents online yesterday that contained lists of vulnerabilities in popular tech products, sending companies scrambling to make sure their security patches were up-to-date. But as companies reviewed the documents, it became clear that most of the vulnerabilities they contained were outdated.

Apple first dismissed the majority of the listed iPhone vulnerabilities in a statement last night, and now Google and other firms are following suit.

“As we’ve reviewed the documents, we’re confident that security updates and protections in both Chrome and Android already shield users from many of these alleged vulnerabilities. Our analysis is ongoing and we will implement any further necessary protections. We’ve always made security a top priority and we continue to invest in our defenses,” Google’s director of information security and privacy Heather Adkins said in a statement.

Finding flaws in iPhones and Android devices was important to the CIA’s mission of surveilling targets because the security problems could allow the agency to eavesdrop on users’ communications.

It’s important to note that, although Google and Apple both say that most of the vulnerabilities are fixed, that doesn’t mean all of them are. Users concerned about the security of their devices need to make sure they’re updating to the latest software to get all of the security patches.

The Wikileaks disclosure has reignited a debate over whether U.S. intelligence agencies should disclose software vulnerabilities to companies so they can be fixed, or hoard them so they can be used for spying.

Mozilla’s chief legal and business officer Denelle Dixon highlighted the importance of disclosure in conversation with the New York Times. “The C.I.A. seems to be stockpiling vulnerabilities, and WikiLeaks seems to be using that trove for shock value rather than coordinating disclosure to the affected companies to give them a chance to fix it and protect users,” Dixon said. “Although today’s disclosures are jarring, we hope this raises awareness of the severity of these issues and the urgency of collaborating on reforms.”

Many tech industry advocates believe that the government has a responsibility to protect American businesses and consumers by notifying companies of security flaws, rather than keeping them secret and exploiting them. The Obama administration pushed the Vulnerabilities Equities Process to help government agencies determine when to disclose vulnerabilities to companies, but the Wikileaks documents raise questions about whether the VEP is effective.

“The White House vulnerabilities equities process spells out what the government should be doing when it comes into possession of 0-days,” Alex Rice, chief technology officer of HackerOne, told TechCrunch. “It’s unclear if it’s been honored properly in this case. Were these vulnerabilities handled in the way outlined by the previous administration? And if not, what do we do about that? Was the process illegitimate to begin with? It’s restarting a conversation we thought we had a clear answer to.”

Rice, who worked on Facebook’s security team before helping launch the bug bounty platform HackerOne, said the vulnerabilities Wikileaks reported in Samsung smart TVs had a personal impact on him: Wikileaks claimed the CIA spied on targets through their TVs, and Rice has a Samsung TV facing his bed. “I’m not worried about the CIA eavesdropping on my television. If the CIA is going to conduct espionage on me, they have more than enough means to do so. What I am concerned about, if the U.S. government knows I have vulnerable tech in my bedroom, that has direct implications to my privacy. That’s something I should know about as a taxpayer,” Rice explained.

After all, if the CIA discovers a security vulnerability in a popular product, it’s only a matter of time before hackers or other nations’ spy agencies find it too. The CIA knew it had been breached late last year, according to a Reuters report, which calls into question why Apple, Google, Samsung and others weren’t alerted sooner.

“Eventually these vulnerabilities are not going to be secret any longer,” Rice said. “How are we going to minimize the damage when that happens? This leak is proof of that. We are all at a disadvantage if Wikileaks has access to a 0-day in iPhone, Android, or Samsung TV.”

Google crams machine learning into smartwatches in AI push

Google is bringing artificial intelligence to a whole new set of devices, including Android Wear 2.0 smartwatches and the Raspberry Pi board, later this year.

Notably, these devices don’t require powerful CPUs and GPUs to carry out machine-learning tasks.

Google researchers are instead trying to lighten the hardware load required for basic AI tasks, as exhibited by last week’s release of the Android Wear 2.0 operating system for wearables.

Google has added some basic AI features to smartwatches with Android Wear 2.0, and those features can work within the limited memory and CPU constraints of wearables.

Android Wear 2.0 has a “smart reply” feature, which suggests basic responses to conversations. It works much like a predictive dictionary, but it can auto-reply to messages based on the context of the conversation.

Google uses a new way to analyze data on the fly without bogging down a smartwatch. In conventional machine-learning models, a lot of data needs to be classified and labeled to provide accurate answers. Instead, Android Wear 2.0 uses a “semi-supervised” learning technique to provide approximate answers.

“We’re quite surprised and excited about how well it works even on Android wearable devices with very limited computation and memory resources,” Sujith Ravi, staff research scientist at Google, said in a blog entry.

For example, the slimmed-down machine-learning model can classify a few words — based on sentiment and other clues — and create an answer. The machine-learning model introduces a streaming algorithm to process data, and it provides trained responses that also factor in previous interactions, word relationships, and vector analysis.

The process is faster because the data is analyzed and compared based on bit arrays, or in the form of 1s and 0s. That helps analyze data on the fly, which tremendously reduces the memory footprint. It doesn’t go through the conventional process of referring to rich vocabulary models, which require a lot of hardware. The AI feature is not intended for sophisticated answers or analysis of a large set of complex words.
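The bit-array comparison described here resembles locality-sensitive hashing: project a message’s feature vector onto a set of fixed random hyperplanes and keep only the signs, so similar inputs end up with mostly matching bits that can be compared very cheaply. A minimal sketch of that idea — all names, dimensions, and vectors below are hypothetical stand-ins, not Google’s actual on-device model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 16-dim feature vector compressed to a 64-bit signature.
DIM, BITS = 16, 64
hyperplanes = rng.normal(size=(BITS, DIM))  # fixed random projection matrix

def to_bits(vec):
    """Keep only the sign of each random projection: a compact bit array."""
    return (hyperplanes @ vec > 0).astype(np.uint8)

def bit_similarity(a, b):
    """Fraction of matching bits; comparing 1s and 0s is cheap on-device."""
    return float((a == b).mean())

message = rng.normal(size=DIM)                       # stand-in message embedding
paraphrase = message + 0.05 * rng.normal(size=DIM)   # nearly the same message
unrelated = rng.normal(size=DIM)                     # a different message

bits_msg = to_bits(message)
sim_close = bit_similarity(bits_msg, to_bits(paraphrase))
sim_far = bit_similarity(bits_msg, to_bits(unrelated))
# Similar inputs share far more bits than unrelated ones.
```

Because each input is reduced to a short bit array rather than a lookup in a large vocabulary model, both the memory footprint and the comparison cost stay small, which is what makes this kind of scheme plausible on wearable-class hardware.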

The feature can be used with third-party message apps, the researchers noted. It is loosely based on the same smart-reply technology in Google’s messaging Allo app, which is built from the company’s Expander set of semi-supervised learning tools.

The Android Wear team originally reached out to Google’s researchers and expressed an interest in implementing the “smart reply” technology directly in smart devices, Ravi said.

AI is becoming pervasive in smartphones, PCs, and electronics like Amazon’s Echo Dot, but it largely relies on machine learning taking place in the cloud. Machine-learning models in the cloud are trained to recognize images or speech. Conventional machine learning relies on algorithms, superfast hardware, and a huge amount of data to produce more accurate answers.

Google’s technology is different than Qualcomm’s rough implementation of machine learning in mobile devices, which hooks up algorithms with digital signal processors (DSPs) for image recognition or natural language processing. Qualcomm has tuned DSPs in its upcoming Snapdragon 835 to process speech or images at higher speeds, so AI tasks are carried out faster.

Google has an ambitious plan to apply machine learning across its entire business. The Google Assistant — which is also in Android Wear 2.0 — is a visible AI across smartphones, TVs, and other consumer devices. The search company has TensorFlow, an open-source machine-learning framework, and its own inferencing chip, the Tensor Processing Unit.

Google is killing its bold Hands Free payment experiment

When Google launched Android Pay at its I/O conference back in 2015, it also teased a program that let you keep your phone in your pocket and still go through the normal checkout process. Called Hands Free, the limited pilot used the phrase, “I’ll pay with Google,” to alert the cashier that you wouldn’t actually be using a physical form of payment.

Google has announced that it is shutting down the service, which launched last spring on iOS and Android, on Feb. 8. Available only at select locations like McDonald’s and Papa John’s in the Bay Area, the program required users to upload a photo in the Hands Free app and used Bluetooth, Wi-Fi, and location services on your phone to identify when you were at one of the participating locations.

According to Google’s description of the service, “Then, if you purchase from a store that uses a Hands Free camera, Google will confirm your identity automatically by detecting specific patterns from the template created during signup. The cashier will initiate the charge and you’ll get a notification on your phone after the charge is complete.” During the transaction, the cashier would only see the user’s initials, first name, and photo, keeping payment information and credit card numbers hidden.

Contactless payments have been rapidly spreading across the country, and Google’s idea with Hands Free was to “explore what the future of mobile payments could look like.” While it’s not entirely clear why Google is stopping the program, it writes on the Hands Free website that “we’re now working to bring the best of the Hands Free technology to even more people and stores.”

Unfortunately, Hands Free never made it out of pilot mode and was extremely limited, so there’s a good chance you’ve never used or even heard of the program. However, the concept of being able to pay quickly and securely without pulling out your phone or reaching for your wallet is certainly intriguing, and it’s likely that Google will take what it learned and apply it to Android Pay down the road, perhaps tapping Google Assistant as it works to bring the service nationwide.

This story, “Google is killing its bold Hands Free payment experiment,” was originally published by Greenbot.