Hi there!
This time around:
Go ahead and crush one of philosophy’s toughest challenges already
Cool things: great podcast episode, TIME100 AI, LLM visualization
Deep Friday preview: prompttools and The Language Game, as probed by LLMs
Drop us a line in the comments if we say something particularly brilliant or foolish. That’s it!
-The Boring Enterprise Nerds
Hot Takes
Just Solve The Hard Problem Of Consciousness, No Biggie
I have a blast writing, tinkering, and thinking about AI in all its forms. The emergence of powerful large language models has only cranked up my enthusiasm. But these new capabilities give me pause; I don’t think any of the current generation are truly sentient, conscious, or generally intelligent…but their abilities point to a future where they develop those characteristics.
If they gained any of those abilities, how would researchers know? Two recent papers point to active efforts to find out. In “Taken out of context: On measuring situational awareness in LLMs”, the authors discuss situational awareness: a model understanding its own existence as a model. Since many LLM capabilities arrive unexpectedly, the authors propose experiments on component abilities that would contribute to situational awareness. A kind of “if the kid can crawl, it won’t be long before the kid can walk” test. Interesting tidbit: the paper’s fine-tuning data describes this kind of test but includes no examples of pass/fail results…and the LLMs still pass these researchers’ “out-of-context reasoning” tests.
Another paper making the rounds is “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness”, where the authors survey current science on consciousness and come up with “indicator properties”. The researchers test several of the latest AI systems for them and (thankfully? not-thankfully?) suggest that none of the current leaders are conscious.
Of course I want AI to be not-harmful to humans. But part of me wonders whether along the way we’ll harm beings we didn’t even know were there. PM
The Sikka Experience
In the SAP side of my professional life, Dr. Vishal Sikka’s impact looms large. A former CTO of SAP, he played a huge role in the company-shaking HANA evolution. He left SAP in 2014 to become CEO of Infosys, where he spearheaded the company’s portion of a $1 billion pledge to OpenAI in 2015. That’s consistent with his PhD from Stanford, where he interacted with early AI luminaries Marvin Minsky and John McCarthy.
The above details fall into place when you find out what he’s currently up to: an AI startup called Vianai. They make hila, an LLM tuned for enterprises’ safety and security needs - and an interestingly-named technology called Zero-Hallucination, which I’d love to find out more about.
When you read the materials on their site and peruse Sikka’s personal appearances of the last few years, a few common pieces of his perspective emerge:
AI must have a human-centered focus, often pointing out that LLMs need to address bias.
For a technology with such a massive potential impact, there is an asymmetry in talent. Only a few tens of thousands of true experts, and perhaps half a million engineers with skills to implement. (Though this is surely skyrocketing by the week.)
Enterprises can play a unique role in the adoption of AI for the world. By focusing as hard as they do on safety, risk, and privacy issues, they put immense pressure on the big tech players to address them. diginomica’s Jon Reed also sees that role: enterprise tech’s “chance for redemption”.
For enterprises to meet that challenge, we need to hear stories of generative AI making a difference for businesses. It may be top of mind for many C-levels - but until the successes are clear and numerous, the narrative remains to be written. PM
Cool Things
AI and Common Sense: Sean Carroll’s Mindscape podcast had Yejin Choi on (she’s part of TIME100 AI, see next COOL THING). An illuminating discussion, as both interlocutors are striving for clarity and effective communication. Note also her paper on humor understanding, “Do Androids Laugh at Electric Sheep?” Sean’s podcast is great for general-purpose curiosity. PM
TIME100 AI: It’s about TIME someone did a list like that (ouch). Great list of get-to-know-these people. A couple of my faves: Anthropic CEO and President Dario and Daniela Amodei, DeepMind co-founder and Chief AGI Scientist Shane Legg, and extraordinary short-story crafter Ted Chiang. Chiang is a favorite - he wrote an illuminating “blurry jpeg” piece, and I found his early ’90s short story “Understand” to be a fascinating picture of a superintelligent, enhanced human - is that anything like what it’d be like to be a superintelligent AGI? PM
Visualize token predictions: Right in your browser! Gain intuition on models predicting likely next words! Marvel at the app actually storing and running a local version of GPT-2! Share sweet links with your friends as you try to beat each other’s perplexity scores! PM
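If you want intuition for the perplexity scores the app shows, here’s a minimal sketch of the underlying idea: perplexity is just the exponential of the average negative log-probability a model assigned to the tokens that actually occurred. The distribution below is a made-up toy example, not real GPT-2 output.

```python
import math

# Toy next-token distribution for a (hypothetical) prompt like
# "The cat sat on the ...". A real model like GPT-2 produces a
# distribution over its whole vocabulary at every step.
next_token_probs = {"mat": 0.40, "floor": 0.25, "sofa": 0.20, "moon": 0.15}

def perplexity(probs_of_actual_tokens):
    """Perplexity = exp of the average negative log-probability the
    model assigned to the tokens that actually occurred. Lower is better;
    a model that predicted every token with probability 1.0 scores 1.0."""
    n = len(probs_of_actual_tokens)
    avg_neg_logprob = -sum(math.log(p) for p in probs_of_actual_tokens) / n
    return math.exp(avg_neg_logprob)

# Suppose the text continued with "mat", and two earlier steps assigned
# probabilities 0.5 and 0.1 to the words that actually appeared:
print(round(perplexity([0.5, 0.1, next_token_probs["mat"]]), 2))  # → 3.68
```

So “beating a friend’s perplexity score” amounts to picking continuations the model considers more probable, step after step.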
In The Deep Fryer
Our first Deep Friday lands this Friday (Sept. 15), wherein Paul monkeys around with prompttools. He’s on the lookout for simple ways to experiment with LLMs, and might just use them to play with some ideas from The Language Game. Further down the Deep Fryer reading list: The Coming Wave by DeepMind co-founder Mustafa Suleyman, Stephen Wolfram’s exceedingly helpful ChatGPT explainer, and other dork things that we can dork about.
New issue of the Boring Enterprise Nerdletter drops this week.
We did a Nerdletter Talk on it. Like/subscribe/share! Or don’t. It’s just a YouTube video.
Please consider supporting this Nerdletter by buying us a cup or two of coffee. Thank you for your continued readership and support!