People with the skills to build such systems have reaped great benefits—they’ve become the most prized of tech workers.

The 99-page document unspools an unpleasant and sometimes lurid laundry list of malicious uses of artificial-intelligence technology.

Example scenarios include cleaning robots repurposed to assassinate politicians and criminals launching automated, highly personalized phishing campaigns.

The report says people and companies working on AI need to think about building safeguards against criminals or attackers into their technology—and even to withhold certain ideas or tools from public release.

The discussion has been triggered in part by government use of algorithms to make decisions that affect citizens, such as criminal defendants, and incidents where machine-learning systems display biases.

Microsoft and IBM recently had to retrain facial-analysis services they sell to businesses because the services were significantly less accurate at identifying the gender of people with darker skin.
