Graphcore’s AI chips may not be as powerful as Nvidia’s GPUs, but they could give you more bang for your buck • The Register
In short The latest results from the MLPerf benchmarking consortium, plotting the best chips to train the most popular neural networks, have been released, and a new player has entered the game: Graphcore.
Each round of MLPerf follows the same format: a sprawling spreadsheet records the time it takes various systems to train or run particular machine-learning models, with the numbers submitted by the hardware vendors themselves.
Nvidia and Google are almost always ahead of the pack, so the latest results aren’t particularly surprising. What’s different this year is that Graphcore joined for the first time. It’s a good sign: it suggests the company’s technology is maturing and is ready to be publicly compared against its competitors’.
While Graphcore’s IPU-PODs were not as quick to train the ResNet-50 computer-vision model and the BERT language model as Nvidia’s A100 GPU or Google’s latest TPUs, the biz’s hardware is much cheaper, so it may have an advantage on performance per dollar. Google’s TPUs, for their part, are only available through the cloud.
You can see the full results here, and more on Graphcore here from our sister site, The Next Platform.
Say goodbye to Pepper the robot
SoftBank has stopped producing its humanoid robot Pepper and is cutting jobs in its robotics unit. Pepper is instantly recognizable by its white body, with a head, two arms, a torso, a wheeled lower half, and a screen. About the size of a small child, it has two circular black eyes and a small smile on its face.
Launched in 2014, the machine was designed to perform all kinds of tasks, such as greeting customers or displaying useful information like menus or directions. But it wasn’t popular, and SoftBank struggled to shift the 27,000 units it made. Now it has decided to stop manufacturing them altogether, according to Reuters, and hundreds of jobs in France, the United States, and the United Kingdom are being cut.
Deployments of the robot in supermarkets and offices have not always gone well. In 2018, a Scottish supermarket reportedly fired Pepper after it unnerved shoppers and frequently told them to look “in the alcohol section” for unrelated items.
The World Health Organization Ethics Guideline on AI
WHO released a 165-page report this week outlining an ethical and governance framework for AI in health.
It is based on six overarching principles that the organization hopes “can ensure that the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders – in the public and private sectors – accountable and responsive to the healthcare workers who will rely on these technologies, and to the communities and individuals whose health will be affected by its use.”
These six principles are:
- Protect autonomy: Machines can automate tasks and generate results, but humans still have to remain in charge of systems and oversee all medical decisions.
- Promote human safety and well-being and the safety and public interest: Make sure that the effects of computer algorithms are studied and regulated to ensure that they do not harm people.
- Ensure transparency, explainability and intelligibility: The technology must be understandable to everyone who uses or is affected by it, whether they are developers, healthcare professionals or patients.
- Promote responsibility and accountability: Understand the limits of AI technology and where it can go wrong. Make sure someone can be held responsible if this is the case.
- Guarantee equity and inclusiveness: AI should not be biased against, or underperform for, people based on age, gender, income, race, ethnicity, sexual orientation, and so on.
- Promote responsive and sustainable tools: Machine-learning software should be designed to be as computationally efficient as possible.
Facebook Improves Research Dataset to Help Developers Build Home Bots
Chores are mundane, and no one in their right mind really enjoys doing the dishes or the laundry. Unfortunately, humans will have to keep doing them until AI robots become nimble and intelligent enough to take over.
Simple tasks like picking up cups and putting them in the dishwasher or a cupboard may be easy for us, but they are incredibly hard for machines. Roboticists may dream of building the perfect algorithm or neural network, but without training data it won’t be any good.
That’s why Facebook released AI Habitat, a simulation platform of 3D-modeled indoor environments for training agents, back in 2019. Now it has been upgraded to Habitat 2.0, which contains 111 unique 3D room layouts populated with 92 kinds of objects, such as drawers, rugs, sofas, plants, and fruit.
Future AI agents can be trained to perform a specific task in simulation, gaining enough experience before being tested in the real world. More interestingly, robots designed to tidy up houses in the United States will likely need to work differently in other countries, where the style of homes and the chores themselves vary.
“In the future, Habitat will seek to model living spaces in more places around the world, allowing for more varied training that takes into account furniture arrangements, types of furniture, and types of objects specific to the culture and the region,” said Dhruv Batra, a Facebook researcher.
You can read more about the dataset here. ®