
Amid the current market hype, the term “generative AI” (GenAI) is being stretched to describe decidedly non-generative AI, even though the lines are admittedly a bit blurred in some cases.

I attended a recent conference where a vendor was expounding on their generative AI capabilities, even going so far as to repeatedly refer to their simulations as “deep fakes.” When pinned down, they admitted they were using PCA for feature engineering, feeding the result into an ensemble of different ML models (including non-DL approaches), and then running regression models to produce a purely statistical simulation with probability spreads, accurate only a few time steps out.
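For reference, the pipeline they described is ordinary, decades-old ML. A minimal sketch of that kind of PCA-plus-regression approach (the data, dimensions, and component count below are invented for illustration; the vendor's actual stack is unknown) might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 samples, 10 correlated features (a made-up stand-in
# for the vendor's data; nothing here reflects their real pipeline).
X = rng.normal(size=(200, 10))
X[:, 5:] += X[:, :5]                      # induce correlation between features
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=200)

# "Feature engineering" via PCA: center, then project onto the
# top-3 principal components obtained from the SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                          # reduced feature matrix

# Ordinary least-squares regression on the reduced features.
Z1 = np.column_stack([Z, np.ones(len(Z))])  # add an intercept column
coef, *_ = np.linalg.lstsq(Z1, y, rcond=None)
pred = Z1 @ coef                            # shape (200,)
```

Nothing in this pipeline learns latent semantics or generates novel content: it projects features onto a subspace and fits a line through them.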

No deep fake. Not even a digital twin. Just classification and regression models. The only thing they were generating in large quantity was smoke and mirrors to impress investors and potential customers alike.

Why would they do that?

In the current AI environment, when someone uses the term GenAI, the assumption is that the solution leverages a Transformer or Diffusion model trained on a large amount of unlabeled data, learning to transform inputs in one format into outputs in (potentially) another format via latent semantics. Most often some visual, audible, and/or textual output is expected, especially in the case of “deep fakes.” When a company touts that they’re using GenAI, that is the expectation it sets.

The public subscribes, businesses desperate to hop on the AI bandwagon purchase, and investors make it rain. Only asking the hard questions will reveal what is really going on.

That being said, the term GenAI is a bit slippery. At the end of the day, Diffusion and Transformer models are, strictly speaking, transformation models – they take an input and generate an output. Granted, that output is complex, embeds the semantics of its inputs, and is often directly comprehensible to humans.

Classification models also take inputs and “generate” one or more outputs. So do regression models. Neither requires deep learning. By the way, that associative memory from the 1960s called, and it wants to be retroactively classed as a GenAI solution too.

Simulations, while not being “deep fakes” or digital twins, could technically also be referred to as GenAI as long as they use true AI models under the hood. Again, in the purest sense of the word “generative,” they take inputs and produce outputs, often feeding back into themselves with (often hand-waved-away) propagated errors that compound until the simulation becomes worthless past a certain number of steps.
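To see how a self-feeding statistical simulation degrades, consider a deliberately tiny toy model (the rates below are arbitrary illustrations, not anything a real vendor reported): true one-step dynamics of 0.99× per step, and a learned model that got the rate slightly wrong at 1.01×, rolled forward on its own outputs.

```python
# True dynamics: x decays by 1% per step. The "learned" model instead
# grows x by 1% per step -- a small, plausible one-step fitting error.
true_rate, model_rate = 0.99, 1.01

x_true = x_sim = 1.0
errors = []
for _ in range(50):
    x_true *= true_rate       # ground truth
    x_sim *= model_rate       # simulator feeding on its own output
    errors.append(abs(x_sim - x_true))

# The error grows monotonically; after 50 steps it exceeds the
# remaining true signal itself, so the rollout is no longer useful.
print(errors[0], errors[-1])
```

A one-step error that looks negligible in validation metrics compounds geometrically once the model consumes its own predictions, which is exactly why such simulations are only trustworthy a few steps out.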

Engineering simulations like FEM/FEA and CFD are less error-prone and don’t use AI #BecausePhysics, but they could still technically be considered “generative” systems.

Conclusion

The call to action for business customers, researchers, and the public in general is to be skeptical and ask the hard questions when someone claims to be using GenAI in their solution. Is it what you’re expecting in the current climate, producing something that is documented to come from a Transformer or Diffusion model? Or is it an older, more common approach wrapping itself in GenAI’s latest hype? Or even – and this is sad but true – not real AI at all?

Rest assured that Zaggy AI doesn’t produce glamorous GenAI solutions. We might leverage the technology to transform our AI outputs into downstream consumables when necessary, and will be very clear when and how we do that.

At the end of the day, we’re not in the business of smoke and mirrors; we’re in the business of solving real problems with real AI.
