GenAI in Content Marketing for 2024
Generative AI is becoming the key to unlocking unprecedented volume, velocity, and value for brands seeking to improve their content performance.
But let's not just talk about custom GPTs and Firefly here - how will GenAI influence visual content marketing workflows, pipelines, and processes in 2024?
With the exponential growth of AI comes the expectation of a 40% increase in productivity by 2035. So how can marketing departments, agencies, and production studios understand where to 'jump into' operational AI usage?
TL;DR
- Less about LLMs in 2024; more focus on multimodal (GMWs, LAMs, and VCMs) agent-led systems.
- Less reliance on token-based subscription platforms.
- The rise of private GPU infrastructure.
- Welcome aboard: Chief AI Officers, Creative Technologists, and AI Creative Directors.
- Courts finally ruling on AI copyright claims.
- AI will be used across ATL campaigns.
- VideoAI will be employed commercially.
- Categorising and upcycling visual content with AI architecture.
The term “LLM” will become less common.
In AI circles these days, the term "large language model" (LLM) is used to describe almost any advanced AI tool.
That made sense for the past few years, as most AI tools were built on text-only models, across multiple industries and use cases. It's been similar in the SEO world: language and keywords are representable at scale. So how do we apply that approach to visuals?
As AI expectations become increasingly multimodal, the term is losing relevance. In 2024, modern models will be multimodal engines - trained on text, images, video, 3D, photogrammetry, audio, music, physical parameters and more - and precisely tuned on the variables that matter to specific industries. They're far more than language models; they are General Model Worlds (GMWs). And that's what we need for effective commercial use cases: the ability to fine-tune our models on a multitude of input variables (colour, sound, vision, text, even commonality of context) and expect those consistent relationships to carry through to the outputs.
Less reliance on token-based subscription platforms.
I'm guessing that if you're reading this, you've played with some of these AI tools: MidJourney, ChatGPT, Claude, DALL-E, Krea, Firefly, AdCreative, Runway, Pika... the list goes on and on.
But can we really integrate these into a commercial content pipeline and rely on them for long-term, scalable, automated production?
Well, no...
Most of these services run on APIs and are metered on token inputs and outputs. As a result, their ethics, privacy, customisation, and scalability rules are difficult to predict and define.
Many generative AI companies are competing with each other, offering similar or overlapping services, and funding will become harder to raise. So what does that mean for the consumer?
Price rises, adaptive terms, and loss of service.
There will be a race to the bottom on pricing between some of these companies, with many trading at a loss (at scale) on existing hardware infrastructure (evidenced in per-token input/output pricing) and hoping to make it up on volume.
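To make the per-token economics concrete, here's a minimal sketch of the maths behind a token-metered bill. The prices and volumes below are illustrative assumptions, not any vendor's real rates.

```python
def monthly_token_cost(
    requests_per_day: int,
    avg_input_tokens: int,
    avg_output_tokens: int,
    input_price_per_1k: float,   # $ per 1,000 input tokens (assumed rate)
    output_price_per_1k: float,  # $ per 1,000 output tokens (assumed rate)
    days: int = 30,
) -> float:
    """Estimate the monthly API bill for a token-metered service."""
    per_request = (
        avg_input_tokens / 1000 * input_price_per_1k
        + avg_output_tokens / 1000 * output_price_per_1k
    )
    return round(per_request * requests_per_day * days, 2)

# A pipeline generating 2,000 assets a day quickly dwarfs a flat seat fee:
cost = monthly_token_cost(
    requests_per_day=2000,
    avg_input_tokens=1500,
    avg_output_tokens=800,
    input_price_per_1k=0.01,
    output_price_per_1k=0.03,
)
```

The point of the sketch: per-token costs scale linearly with volume, which is exactly what makes them hard to budget for in an automated pipeline.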
Future investment is going towards developing more efficient models, leveraging new AI compute hardware, agentic AI systems, and teams providing value-added services such as industry-specific model fine-tuning, process definition, and ethical compliance.
Privatised GPU Infrastructure
Most of us don't run local or private AI architecture, so we're not paying Nvidia directly for GPUs. We're used to going through intermediaries such as Microsoft, Google, AWS, Meta, or OpenAI.
2024 will be the catalyst for owned, homegrown AI chipsets capable of running models and AI tools, competing directly with Nvidia's GPUs. Nvidia will also enter the cloud space, making AI computation look more like rendering did 15 years ago. Apple is already exploring on-device AI management on iPhones.
We'll see some brands adopting owned AI servers, cloud architecture, and more scalable tokenisation. AI accountability will be on the rise: the experimentation and exploration phases of R&D and testing will move to private and secure domains, allowing brands to iterate frameworks, models, pipelines, and processes internally - deploying to enterprise in phased sprints.
We're gonna need
a bigger team...
Welcome aboard: Chief AI Officers, Creative Technologists, and AI Creative Directors.
Artificial intelligence has shot to the top of the agenda in most boardrooms this year, with teams scrambling to figure out when to jump onboard, who to trust, and how to integrate this powerful new opportunity.
One trend we expect to see is appointing a “Chief AI Officer” to spearhead these initiatives.
We saw a similar trend play out over the past few years as businesses worked to integrate and reinforce sustainable operating practices.
The integration of operational AI will filter through multiple facets of the business. If you're pulling up the 'process carpet' and installing something new, you may as well paint the walls at the same time, right?
In the agency world, leaders in creative and client roles (creatives, art directors, technologists, producers, and strategy/solutions teams) will need to understand and embrace these opportunities for clients looking to exploit the immediate opportunities at hand.
We'll see some resolutions on
GenAI copyright and IP ownership.
Almost every talk we gave in 2023 about GenAI featured the controversial topic of legal and ethical risk when applying this technology commercially.
The world's leading generative AI models have been trained on copyrighted content - this is a fact.
In recent months we've evolved these conversations to focus attention on the legal considerations; however, the fact remains that as models, frameworks, and infrastructure mature, these foundations are going to be harder to prove.
Whether it's functional content from GPT-4, visuals straight out of MidJourney or SDXL, or videos from Runway, most of these AI companies have trained their models on the wealth of content on the internet.
We have been waiting years for rulings on many of these initial cases! But one thing is for sure: the cat is truly out of the bag, and modular AI will be part of content generation however these cases land. They will be landmark cases, but where we have been growing over the past two years isn't in the LLM - it's in the framework, the modular processes, and the hybrid workflows. This is the true innovation in content generation. The models will be interchangeable, and we will place the framework atop the most ethical, relevant, and future-focused models.
2024 will bring more resolution to the older cases, and sharpen the attention of those who are still standing on the AI platform - waiting to board the train.
We'll see brands, agencies, and production teams lean further into AI systems (not just tools) - utilising AI architecture across more ownable and attributable models.
Content & ATL Campaigns heavily augmented with AI.
We're used to seeing AI advocates talking about how AI will make things more personal, more efficient, and more cost effective. Well, what about more interesting? More consistent? More creative? More engaging?
In 2024, we will see BTL, eCommerce, and SEO content influenced by an AI framework (from ideation all the way to deployment and optimisation), but we'll also see agencies using AI to influence and generate ATL content. In the last two months we have seen huge growth in IP-Adapters, noise styling, and ControlNets, all designed to enhance, upscale, and hybridise visual media production. We predict we'll be using GenAI in virtual production, AI content in creative development, and AI across the gamut of post-production content extremely liberally in 2024.
"Like pre-visualisation, B-roll, animation, editing, colour grading, and audio mastering - we'll start to legitimately refer to an 'AI phase' in any project." - James Pierechod
We're building private GPT workflows, LoRAs, and embedding frameworks to allow brands, agencies, and production teams to generate seamless, consistent, branded visual content in a private and secure landscape.
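As a rough illustration of what a private fine-tuning setup might look like, here's a hedged sketch of a brand-specific LoRA job description. The field names, paths, and values are hypothetical assumptions for illustration, not a real trainer's schema.

```python
# Hypothetical description of a private brand-style LoRA training job.
# Every name and path here is an illustrative assumption.
BRAND_LORA_JOB = {
    "base_model": "sdxl-base-1.0",               # assumed checkpoint name
    "adapter": "lora",                            # low-rank adapter, kept separate from base weights
    "rank": 16,                                   # a typical small LoRA rank
    "train_data": "s3://brand-private/assets/",   # data never leaves the brand's own storage
    "trigger_token": "<brand-style>",             # embedding token the team prompts with
    "epochs": 10,
    "output": "s3://brand-private/loras/brand-style-v1.safetensors",
}

def validate_job(job: dict) -> list:
    """Return the sorted list of required fields missing from a job spec,
    so a broken job is caught before it hits private GPU infrastructure."""
    required = {"base_model", "adapter", "train_data", "output"}
    return sorted(required - job.keys())
```

The design point is the separation: the base model is interchangeable, while the brand's look lives in a small, privately stored adapter plus an embedding token.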
VideoAI is entering the mainstream
2024 will be the year that VideoAI gets commercially integrated into content production workflows.
We have seen the growth of token-metered services such as Runway, Pika, Meta's offerings, and Gemini Pro Vision (Vertex AI now offers GenAI) entering the market, and in early 2024 MidJourney announced that their video offering will be here within the next month or so! But we're also on the cusp of owned and private GenAI environments for video.
And the launch of LCM-LoRAs and S-LoRAs has dramatically reduced the latency between input prompt and result. This will be the foundation of the motion and VideoAI revolution. We will start to see the end of text prompting in video, replaced by augmented visual inputs.
This hybrid VideoAI integration is amplified by the growth of realtime ControlNets, depth segmentation, and IP-Adapters within workflows - allowing more powerful and definable inputs (such as green screen, audio integration, or CGI rendering and pre-visualisation) that expand the variability of the results.
AI augmented recycling, repurposing, and reuse of content.
We're not just talking about reusing content here! We're talking about recreating, regenerating, and reviving the potential of visual content you already own. On a macro level, we're talking about client retention and evergreen content flow.
For example: a coffee shot looks great with a milk pour (in glorious slow motion). The cups, context, and styles may change - but certain physical attributes of that milk pour remain the same.
In 2024, we expect to evaluate visual content on this level. We'll create frameworks to automate this on a macro basis, allowing creatives to influence context and style while protecting (and automating) the purpose of the scene.
Asset Control Management and Retrieval: While we're doing this, we're cleaning house on old-fashioned asset management systems (DAMs). Fifteen HDDs and a subscription to an ill-maintained cloud storage facility are not the future of content accessibility. DAMs with cloud-integrated access, curated and managed by AI, will make last-mile location, tagging, versioning, and classification of large content libraries simple - making it easier to retrieve, send, and reuse owned content.
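The idea above can be sketched minimally: each asset carries machine-generated tags and a version, and retrieval becomes a simple tag query. The tagging function here is a stub standing in for a vision model, and all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One record in a (hypothetical) AI-curated DAM."""
    asset_id: str
    path: str
    tags: set = field(default_factory=set)
    version: int = 1

def auto_tag(asset: Asset, model_tags: set) -> Asset:
    """Merge tags proposed by a (hypothetical) vision model into the record."""
    asset.tags |= model_tags
    return asset

def find_assets(library: list, *wanted: str) -> list:
    """Return ids of assets whose tags contain every wanted tag."""
    return [a.asset_id for a in library if set(wanted) <= a.tags]

# Illustrative library: the milk-pour example from the text.
library = [
    auto_tag(Asset("A001", "shots/milk_pour.mov"), {"milk", "slow-motion", "pour"}),
    auto_tag(Asset("A002", "shots/latte_art.mov"), {"milk", "latte"}),
]
```

Once tagging is automated, "find every slow-motion milk pour we own" becomes a one-line query rather than a dig through fifteen hard drives.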