
After generative AI burst onto the enterprise tech scene (powered by a rather large kick from OpenAI), business people who didn't usually talk about AI started getting enthused by its potential. This included senior executives who, armed with this newfound enthusiasm for generative AI, started allocating new, sizable budgets.

This was great for the AI industry, bringing the discussions that we’ve been having for some time now into the mainstream.  However, the resulting focus has been on generative AI (typically around ChatGPT or similar technologies) almost to the exclusion of anything else.

There are, however, many other types of AI. For want of a better term we can call this traditional AI: AI that doesn't necessarily generate content but learns from training data to perform business tasks. And whilst the conversation and budgets may be focused on generative AI, these traditional approaches may well be the right tools for the job, having driven significant outcomes and revenues across industries for many years.

A few years on from the big-gen-AI-bang, some industry observers are pulling back from the astronomical predictions. A good read for a balanced view on the state of generative AI comes from Goldman Sachs, titled "Gen AI: too much spend, too little benefit?". The article presents some positive views for the future of generative AI, but also some views highlighting the limited benefits realized relative to the high cost of usage (equipment, power requirements, etc.).

And why is this? Well, it may sound obvious, but generative AI is really good at... well, generating stuff. This aligns well with use cases that require it: summarization is a use case that's often highlighted, as is information extraction (a.k.a. search).

But a huge chunk of business requirements doesn't need content to be generated! This isn't a point about data types (tabular, text, vision, etc.) but about the requirements of the task at hand. If you're a bank working with financials and customer approvals (likely running on tabular data), a regression model (a traditional AI model that predicts a number) may be appropriate for your need; if you're in manufacturing, defense or oil and gas, you may have computer vision models that use object detection for security purposes: you don't need to generate security threats, you need to detect them with traditional AI; if you're a healthcare or insurance company that needs to flag text documents for review in your claims processes, you may need a text classification model to classify those documents. I could go on.
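To make the bank example concrete, here is a minimal sketch of a traditional regression model on tabular data. It assumes scikit-learn is available, and the feature names and figures are entirely made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy tabular features: [annual_income, existing_debt]
# Target: an approved credit limit (all figures illustrative)
X = np.array([
    [50_000, 5_000],
    [80_000, 20_000],
    [30_000, 1_000],
    [120_000, 40_000],
])
y = np.array([10_000, 15_000, 6_000, 22_000])

# A traditional AI model: it predicts a number, it generates no content
model = LinearRegression().fit(X, y)

predicted_limit = model.predict(np.array([[60_000, 10_000]]))[0]
print(f"Predicted credit limit: {predicted_limit:.0f}")
```

No prompt engineering, no foundation model, and the whole thing trains in milliseconds on a laptop.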

The point I’m making here is that generative AI has lots of uses, but it’s part of a portfolio of tools and techniques.

I'd also like to point out that this is not to say we don't need language models. I'd like to separate language models per se from generative AI. Take text use cases: whilst in the past we used to train models from scratch (perhaps using n-grams and a support vector machine), in recent years we've moved to transfer learning with deep neural networks to produce language models (think BERT, etc.). We're just not generating content with them; we may be classifying documents with them. This falls under our umbrella of traditional AI rather than generative AI. Similar transfer learning techniques exist in the computer vision space (again, with deep neural models).
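For readers who haven't seen the older "from scratch" approach mentioned above, here is a sketch of an n-gram plus support vector machine text classifier. It assumes scikit-learn is installed; the documents and labels are illustrative only:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labelled documents (illustrative only)
docs = [
    "invoice overdue payment",
    "meeting agenda attached",
    "payment reminder final notice",
    "schedule the quarterly meeting",
]
labels = ["finance", "ops", "finance", "ops"]

# Word unigrams and bigrams as features, feeding a linear SVM:
# the classic pre-transfer-learning recipe for text classification
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(docs, labels)

print(clf.predict(["overdue payment notice"])[0])
```

A fine-tuned language model such as BERT would typically replace the n-gram features here, but the output is the same kind of thing: a class label, not generated content.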

Could you use a generative AI model to do this stuff? Probably. You could invest in all the infrastructure, allocate the huge budgets, experiment with massive foundation models, start a prompt engineering project and work hard over months to bend the generative model to these tasks. But why would you? You'd be using the wrong tool for the job. With the portfolio approach, you'd be free to select the best tool to maximize business value and ROI.

What about all the new hardware that's being released to run AI models? Is that going to be redundant? I would argue that it is not. With a range of AI models (typically dominated by some form of neural network architecture), a range of hardware is required: CPUs with excellent matrix and vector acceleration, GPUs (large and small) for heavy lifting, LPUs (language processing units) for fast inference, or NPUs (neural processing units) for power-efficient on-device computation. This portfolio of hardware options supports a portfolio of AI techniques. Again, the right tool for the job at hand.

The portfolio approach is one that we take at Chatterbox Labs too. Even before generative AI became mainstream, enterprise organizations had portfolios of diverse AI models and technologies. The models in these portfolios may carry unknown business risks, so we design our AI risk technology to work across all these different types of models; generative AI is just an addition to the portfolio.

So, in conclusion: yes, it's great that we're all talking about AI now, and there can be huge business benefit from it. But it's time to take stock and recognize that generative AI is just one tool in a portfolio of AI tools, each with its own merits.
