I helped work on a thing last weekend that I can’t write about yet, and then last week I found my way to San Jose for Nvidia’s GPU Technology Conference, and fine, all right, OK, I’m convinced: now that the smartphone boom is plateauing, AI/deep learning is the new coal face of technology — and, at least for now, Nvidia bestrides it like many parallel colossi.

It’s the place where advances are being made, where the most value is being created … but it’s also a messy business, often with little visibility, with many ways to go terribly wrong.

The Nvidia GPU conference featured a sizable zone of scientific posters exploring the cutting edge of GPU usage, something you don’t see at a lot of tech conferences.

It turns out some of those smug academic Ph.D.s were onto something after all.

All the hands-on AI/ML/deep-learning/neural-network experience I have is some time spent playing around with TensorFlow, a graduate-level neural-network course I took back in the day, and some book research.

“Deep learning” has a fairly specific technical meaning, but as a name for the whole field it seems the least bad option, and it’s what Nvidia CEO Jensen Huang uses, so let’s go with that.