One of the biggest hurdles in the field of artificial intelligence is preventing it from developing the same intrinsic faults and biases as its human creators, and using AI to solve social problems instead of merely automating tasks. Now Google, one of the world's leading organizations developing AI software, is launching a global competition to help spur the development of applications and research that have positive impacts on the field and on society at large.

The competitors, referred to as the  Google AI Impact Challenge, was announced today at an event called AI for Social Good held at the company’s Sunnyvale, California office. Google is positioning it as a manner to combine nonprofits, universities, and different organizations not inside the company and profit-driven world of Silicon Valley into the future-looking improvement of AI research and applications. The corporate says it should award up to $25 million {dollars} to a variety of grantees to “help transform the best ideas into action.” As a part of the contest, Google will supply cloud sources for the project, and it is opening applications starting today. Accepted grantees will probably be introduced at subsequent 12 months’s Google I/O developer convention.

Top of mind for Google with this initiative is using AI to solve problems in areas like environmental science, healthcare, and wildlife conservation. Google says AI is already being used to help pin down the location of whales by monitoring and identifying whale sounds, which can then be used to help protect them from environmental and wildlife threats. The company says AI can also be used to predict floods and to identify areas of forest that are especially vulnerable to wildfires.

Another big area for Google is eliminating biases in AI software that can replicate the blind spots and prejudices of human beings. One notable and recent example was Google admitting in January that it couldn't find a way to stop its photo-tagging algorithm from identifying black people in photos as gorillas, initially a product of a largely white and Asian workforce unable to foresee how its image recognition software could make such fundamental mistakes. (Google's workforce is only 2.5 percent black.) Instead of figuring out a solution, Google simply removed the ability to search for certain primates on Google Photos. It's those kinds of problems, the ones Google says it has trouble foreseeing and needs help solving, that the company hopes its contest can address.

The competition, alongside Google’s new AI for Social Good program, follows a public pledge published in early June, in which the company said it would never develop AI weaponry and that its AI research and product development would be guided by a set of ethical principles. As part of those principles, Google said it would not work on AI surveillance projects that violate “internationally accepted norms,” and that its research would follow “widely accepted principles of international law and human rights.” The company also said its AI research would primarily focus on projects that are “socially beneficial.”

In recent months, many of technology's biggest players, Google included, have grappled with the ethics of developing technology and products that may be used by the military, or that could contribute to the development of surveillance states in the US and abroad. Many of these technologies, like facial and image recognition, involve sophisticated uses of AI. Google in particular has found itself embroiled in controversies around its participation in a US Department of Defense drone initiative called Project Maven, and around its secret plans to launch a search and algorithmic news product for the Chinese market.

After severe internal backlash, external criticism, and employee resignations, Google agreed to pull back from its work with Project Maven following the fulfillment of its contract. Yet Google has said it’s still actively exploring a product for the Chinese market, despite concerns it could be used to surveil Chinese citizens and tie their offline activities to their online behavior. Google has also said it still plans to work with the military, and its controversial Google Duplex service, which uses AI to mimic a human and make calls on a user’s behalf, will begin rolling out on Pixel devices next month.

Jeff Dean, the head of the company's Google Brain AI division and a senior research fellow, says the AI Impact Challenge is not a reaction to the company's more recent controversies around military and surveillance-related work. "This has been in the works for quite some time. We've been doing work in the search space that is socially beneficial and not directly related to commercial applications," he told a group of reporters after the event. "It's really important for us to show what the potential for AI and machine learning can be, and to lead by example."