JUNE 19 — Twitter (still not X to me) remains the best real-time monitor of reactions, and Apple's Worldwide Developers Conference (WWDC) was no exception.
Of course, there were plenty of leaks, and some people had created Bingo boards in advance covering what would (likely) be announced, as well as appearances of WWDC tropes such as senior vice-president of software engineering Craig Federighi's fabulous hair.
The big machine learning elephant in the room, AI, was left until the very end, and like many others I think that might not have been the best decision.
Leaving the unveiling near the close of the presentation instead of making it the centrepiece was a lost opportunity to foster a better understanding of the technology and to differentiate Apple's AI offerings from its competitors'.
Machine learning has been at the core of most of Apple's recent innovations, and at heart, machine learning is AI.
More correctly, it is a subset of AI: all machine learning is AI, but not all AI is machine learning.
I studied artificial intelligence at university and learned to code in Prolog, a declarative logic programming language, so it was no surprise to me when Apple started using the term "machine learning" less, swapping it for the broader umbrella term AI.
AI by itself is not something to be feared. After all, fuzzy logic, which was big in the 80s, was also a form of AI, and it was embedded in everything from washing machines to rice cookers.
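For the curious, the core idea of fuzzy logic is replacing hard on/off thresholds with degrees of truth between 0 and 1, which is what let those appliances adjust heat smoothly instead of just toggling it. Here is a minimal sketch of the idea in Python; the temperature breakpoints, ramp widths and blending rule are all invented for illustration and not taken from any real appliance:

```python
# Toy fuzzy-logic heater control in the spirit of an 80s rice cooker.
# Every number here is made up purely for illustration.

def too_cool(temp_c: float) -> float:
    """Degree (0 to 1) to which the pot counts as 'too cool'."""
    if temp_c <= 85:
        return 1.0
    if temp_c >= 95:
        return 0.0
    return (95 - temp_c) / 10  # linear ramp between 85C and 95C

def too_hot(temp_c: float) -> float:
    """Degree (0 to 1) to which the pot counts as 'too hot'."""
    if temp_c <= 100:
        return 0.0
    if temp_c >= 110:
        return 1.0
    return (temp_c - 100) / 10  # linear ramp between 100C and 110C

def heater_power(temp_c: float) -> float:
    """Blend two fuzzy rules into one smooth output between 0 and 1:
    if too cool, push power up; if too hot, pull power down."""
    power = 0.5 + 0.5 * too_cool(temp_c) - 0.5 * too_hot(temp_c)
    return max(0.0, min(1.0, power))

print(heater_power(90))   # 0.75: partly 'too cool', so above-average heat
print(heater_power(105))  # 0.25: partly 'too hot', so the heater backs off
```

The point is that the controller never flips abruptly; a classic thermostat, by contrast, is all-or-nothing at a single cutoff temperature.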
The hot-button topic of the moment is generative AI: AI that creates, or "generates", content from trained models in response to prompts.
There are ethical questions to be answered about this tech: how sustainable it can be when it consumes electricity and resources such as water in even more staggering amounts than cryptocurrency did, and whether it is right to use copyrighted material without compensation or attribution.
On Facebook I see open hostility towards generative AI, particularly when it comes to artwork.
Creatives are currently cancelling, or threatening to cancel, their Adobe subscriptions in response to a change in the terms of service that implies any of their work uploaded to the cloud could be used by Adobe without permission or compensation.
There doesn't seem to be as much instantaneous backlash against Apple's approach of baking AI into its processes: instead of relying on the cloud, Apple will handle most of the processing on-chip, on-device, and on its own servers, adding a layer of privacy and security.
As for the partnership with OpenAI, it will be more of a last resort, for when processing needs exceed what an Apple device can manage, which will likely be a rare occurrence.
It seems to be a more acceptable compromise for those who do not trust the lofty visions of the likes of OpenAI's Sam Altman, whose views range from moderately ridiculous to absolutely preposterous.
Apparently, no money will change hands in the partnership; instead, Apple will offer OpenAI distribution and access via its devices, and should customers decide to pay for additional OpenAI subscriptions, that will be their decision.
Siri being less useless would certainly be a boon because, unlike my previous Google smart speaker, Apple's avatar of sorts responds only to a select few commands and is less clever than my tabby cat.
What is still a question mark is what Apple trained its own AI tech on, and whether the company might become a target of irate artists, who are currently suing companies such as Midjourney, Stability AI and DeviantArt for their use of allegedly copyrighted art.
In most ways, however, Apple's approach to AI might be the safest and least environmentally concerning.
After all, Apple's strategy of not rushing to be first, but arriving later with the best, or at least a better, implementation of a new tech feature has been borne out in most cases.
Learning from others' mistakes means that it will (probably) be far less likely for Apple's AI to turn up child porn or straight-up plagiarised copy or images.
I still find the notion of creating AI images of my friends in messaging apps distasteful; instead I will reserve that for my enemies.
In the meantime, my Microsoft Edge browser keeps popping up, "offering" to help me rewrite the wire pieces I'm prepping for upload, which is ridiculous and shows that it is not at all intelligent. Why would I need its help to rewrite Reuters?
Right now I would, at most, let AI spell-check my writing, because if some day a machine can be as snarky as I am on most days, I will take it as a sign to retire to a mountain somewhere to herd goats and die of exposure.
I think AI can only make your writing better if you aren't particularly good in the first place, or are very bad at judging what good writing is; I would define good writing as engaging, individual, and going beyond merely communicating your objective.
We'll have to wait and see how Apple Intelligence (a ridiculously clever moniker, even if it did make many of us groan) does in real-world use.
Having been signed up for the Apple developer betas for literal years, I look forward to getting a better understanding of what was announced by actually prodding things to see how they work.
Next week, I'll talk about iPadOS and whether it's worth getting excited for the next update.
* This is the personal opinion of the columnist.