Leadership, Tech and Career Conversations Newsletter
6 Tips for Detecting Deepfakes and Mitigating against AI-Powered Disinformation
In a year when 2 billion people will be voting, it's never been more important to mitigate against disinformation

Lead. Learn. Share. Soar.
According to the World Economic Forum, an estimated 2 billion people will have cast their votes in 50 different countries by the end of this year. 2024 will have an incredible impact on the social, political and economic fabric of our world. As such, it’s more important than ever to understand the world of deepfakes, arm ourselves against the dangers of targeted manipulation and mitigate against the risk of being taken in by disinformation campaigns. This is something we need to do for ourselves. It is also something we need to do for those around us by supporting them in learning, and taking, the practical steps we’ll share in this article.
Here’s a quote from an icon that I deeply respect:
One isn't necessarily born with courage, but one is born with potential. Without courage, we cannot practice any other virtue with consistency. We can't be kind, true, merciful, generous, or honest.
In this newsletter:
6 Tips for Detecting Deepfakes and Mitigating against AI-Powered Disinformation
Upcoming Leadership Conversation with Jesmane Boggenpoel, Managing Partner at AIH Capital
Tech Corner - Why the Announcement Around Apple Intelligence has Ruffled Feathers
Healthy at Work - Self-care in an Environment where Self-exploitation is Celebrated
Entrepreneurs’ Corner - Opportunities for Learning More About Venture Capital and Private Equity Funding
6 Tips for Detecting Deepfakes and Mitigating against AI-Powered Disinformation
Over the past few years we’ve heard a lot about alternative facts. It’s been a while since we lived in a world where we drew our news from the same 5 newspapers and 4 TV channels (2 of which were only available for 12 hours a day and one was a premium, subscriber-only channel). We now live in a world where entertainment channels can call themselves news channels and get away with it. News channels are plentiful, run 24/7 and some have an openly declared agenda. Some "news sources" don't consider themselves accountable to either industry bodies or the public.
Our world is divided. We can’t agree on what counts as a reliable source of information to support our views and standpoints, so we regularly end up in a stand-off, distrusting each other's sources. Ours is a world where we sometimes mock what some would reasonably call credible scientific sources and fact checkers. Academics and scientists are cast as part of a great number of large conspiracies relating to some of the important topics of our time, and every erroneous study or influenced scientist is waved around as evidence that the "whole lot can't be trusted". Our current context is one where harmful deepfakes can thrive.
What’s a deepfake?
Deepfakes are artificially generated or altered media that, although not recordings of true events (as filmed video or live audio would be), are highly realistic and convincing to the consumer of said media.
When social media launched, we celebrated the democratization of information and especially that of news media. Anyone could now be a journalist and report news live. A decade to a decade and a half later, we’re getting our news from a myriad of sources and channels (including individuals), we’re being fed news in line with our previous clicks and we’re solidly anchored in our own bubbles. The more we show interest in certain topics or discussions, the deeper we get pulled into the bubble because, “This might interest you”. That would probably be bad enough, but we’re also regularly bombarded with disinformation and, dare I say it, the real "fake news".
The described information bubble trend has only been made more challenging by the massive improvements seen in the technologies that allow the creation of deepfake videos, images and voice recordings. We already know that every technology is a double-edged sword - wielded for good or bad depending on whose hands it lands in.
This means that, whether we like it or not, deepfakes are here to stay. So it’s imperative that we’re equipped with tactics for identifying deepfakes and protecting ourselves against disinformation.
Upcoming Leadership Conversation with Jesmane Boggenpoel, Managing Partner at AIH Capital
After spending the month of May focused on a couple of critical deadlines (I will share more in the coming weeks and months), traveling my beloved home country and, particularly, my strikingly beautiful home province of the Eastern Cape of South Africa, I am excited to host the next Inno Yolo Leadership Conversation next week. We’ll be in discussion with a leader whose gentle voice is paired with a high-impact, effective personality and a commitment to improving both her country and the world. Jesmane Boggenpoel is a Chartered Accountant, a World Economic Forum Young Global Leader, an experienced Investor, Board Member and Managing Partner at AIH Capital. We’ll be talking about the funding landscape with a specific focus on private equity. What excites me most about Jesmane is her quiet dedication and discipline (and consequent effectiveness) when it comes to enabling economic and social transformation, as well as her commitment to doing the right things and doing them right.

Join us as we speak to the person behind the leader and gain insights on the role that private equity plays in the funding landscape as well as explore how private equity can help scale technology ventures. Visit our upcoming events page to register.
Tech Corner - Why The Announcement Around Apple Intelligence has Ruffled Feathers
On 10 June 2024, Apple announced that it is bringing Apple Intelligence directly to our Apple devices. Apple Intelligence is tipped to leverage generative models to bring a smart, context-aware personal assistant to every Apple device. Now, you might be asking at this point: isn’t that a description of Siri? If you use Siri, however, you’ll know that the difference between Siri and Apple Intelligence is probably the difference between a letter of intent in the business world and an actual binding contract (maybe that’s an expression of hope on my part regarding the latter). Siri has trailed other voice assistants like Alexa and the Google Assistant. Part of the reason Siri has fallen behind might actually be the exact reason why the announcement of Apple Intelligence has ruffled some feathers.
Apple has in the past stood as a stellar example that you can respect users’ privacy and be one of the most successful companies in the world (assessed through a commercial lens) at the same time. Most of Apple’s products are privacy-respecting because Apple deploys its machine learning and generative models using a federated learning approach, combined with trained model sharing, that keeps user data under the control of users rather than transferring it to servers for on-server processing, as is commonplace with other technology providers.

Brrrr…say what now? This means that Apple designs, trains and then deploys machine learning and generative models (read: artificial intelligence models) onto your device, where they leverage your data locally to generate the AI model outputs you see, e.g. the family moments videos created and presented to you in the gallery app. This happens without Apple moving your data off your device for processing elsewhere, i.e. local or on-device processing. Key here is that your data remains on your device and therefore stays private. Apple doesn’t try to take ownership of it or store it for later use.

The locally deployed AI models learn and improve from their own outputs. Only the improvement values are sent back to Apple, so that the improvement values emanating from various devices can be aggregated and used to improve the applicable AI model centrally. The improved AI model is then redeployed. This represents a continuous deployment, learning and improvement cycle that allows AI models to learn and improve while user data is protected.
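The federated cycle described above (train on-device, send back only improvement values, aggregate centrally, redeploy) can be sketched in a few lines. This is a toy illustration of the general federated averaging idea, not Apple's actual implementation; all function names and the "model" itself are invented for the example:

```python
# Toy sketch of one federated learning round: raw data never leaves
# a device; only per-device improvement values (deltas) are shared.
from typing import List

def local_update(global_model: List[float], local_data: List[float],
                 lr: float = 0.1) -> List[float]:
    """On-device step: train on local data and return only the deltas."""
    # Toy objective: nudge each weight toward the mean of the local data.
    target = sum(local_data) / len(local_data)
    return [lr * (target - w) for w in global_model]

def aggregate(deltas_per_device: List[List[float]]) -> List[float]:
    """Server-side step: average the deltas received from many devices."""
    n = len(deltas_per_device)
    return [sum(ds) / n for ds in zip(*deltas_per_device)]

def apply_update(global_model: List[float],
                 avg_delta: List[float]) -> List[float]:
    """Produce the improved model that gets redeployed to all devices."""
    return [w + d for w, d in zip(global_model, avg_delta)]

# One round with three devices; each device's data stays on that device.
global_model = [0.0, 0.0]
device_data = [[1.0, 3.0], [2.0, 4.0], [0.0, 2.0]]
deltas = [local_update(global_model, data) for data in device_data]
global_model = apply_update(global_model, aggregate(deltas))
```

In practice the deltas are model gradients or weight updates (often further protected with techniques like differential privacy), but the privacy property is the same as in the sketch: the server sees aggregated improvement values, never the underlying user data.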
Another reason why Apple has been a beacon of hope around respecting user privacy relates to the many different functions and features focused on ensuring that you can choose to enhance your privacy and mitigate against getting tracked by websites throughout the web. This ranges from the usual cookie-blocking functionality to Apple’s Private Relay. Apple users have come to trust that, although Apple may be imperfect, the company strives for a world where users are not the product and privacy is a given, not something we as users have to buy back. This is not the case with many technology giants and, had it not been for legislation like the GDPR, the Digital Services Act and now the EU AI Act, we’d probably be ill-informed about the number of vendors our data and behavioural patterns are sent to when we decide to use a website and accept its cookies.
Back to Apple Intelligence. What’s got people irate is that a company that has built this level of trust and credibility around privacy is now joining the AI rush, which involves using people’s work and data without their express permission and without engaging the topic of compensation. Now, Apple has taken pains to highlight that they aim to respect data privacy regulations and that they’re still aligning to the principles they’re known for. However, it’s worth highlighting that:
There’s now, and there likely will be in the future, a stronger emphasis on off-device processing since large generative models use a lot of processing power and energy to generate their outputs thereby making on-device processing challenging;
To make room for running their generative models on the cloud, Apple has had to provide reassurance that Apple private cloud compute now extends the trusted security and privacy associated with their on-device processing to the cloud;
They’ve had to specifically highlight measures that focus on ensuring that data that gets processed on their servers is “never exposed nor retained”. This clearly shows that they know they’re walking a tight rope and that there are real risks in undertaking this strategy;
Adding to that they have committed to open up the code on their servers for inspection by independent experts so their users can be assured that nothing untoward is happening with their data and that they live up to their privacy and security promises;
However, most controversial is the confirmation that Apple has joined, and will continue to be part of, the tech movement to scrape data from the web without addressing the controversies around the ethics of using other people’s data without their express permission and without providing compensation for said use. It’s mentioned that web publishers can opt out…but until then? We all know that information takes a while to propagate, so the scraping will continue while many lack awareness, and this offer is probably not meant to apply retrospectively. Additionally, the principles of opt-out vs. opt-in are worthy of discussion;
The other high-controversy point has been the introduction of ChatGPT across Apple platforms. Whilst some may look positively upon this, there is some controversy around the OpenAI project, which began as a non-profit, leveraged other people’s effort and IP without compensation, and has now turned for-profit, still without compensation. Additionally, it’s known that ChatGPT hallucinates, and there have been examples where generative models have been gamed by those wanting to make a point. The level of corporate citizenship and responsibility around deploying ChatGPT without the corresponding user education falls below what we should be able to expect from Apple. There’s also enough there to ask whether there is enough alignment in values and principles between Apple and OpenAI for them to work together without tarnishing Apple’s reputation for privacy and data protection.
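For web publishers, the scraping opt-out mentioned above works through the site’s robots.txt file. As a sketch, assuming Apple’s documented user-agent token for AI-training crawls, `Applebot-Extended` (worth verifying against Apple’s current crawler documentation before relying on it):

```text
# robots.txt - opts this site out of Apple's AI-training crawl
# while leaving ordinary Applebot search indexing unaffected.
User-agent: Applebot-Extended
Disallow: /
```

The asymmetry the article points out still holds: this only protects publishers who know the token exists and act on it, and it cannot undo any crawling that happened before the rule was published.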
Alas, at the end of the day, it is done. The horse has bolted. However, this is definitely something to watch to see if Apple will be able to scale its artificial intelligence offerings without being seduced by the commercial gains experienced by competitors it once stood on the bastion of privacy to compete against.
Healthy at Work - Self-care in an Environment where Self-exploitation is Celebrated
Corporate environments are increasingly target-driven and have always been highly competitive. There are only a few spots at the top and more than one ambitious employee for each. This, combined with a culture that incentivises an unending work cycle, leads to voluntary self-exploitation. Not so long ago, I was exposed to a corporate environment where people wore their self-exploitation like a badge of honour. No wonder, since said self-exploitation was constantly praised and rewarded with recognition and career advancement.
People regaled each other with stories of working while in hospital or sneaking away from meaningful family events while on holiday, on leave or during weekends to tackle this, that or the next thing (which, when closely examined, did not constitute a real emergency). I sensed a definite undercurrent of self-protection arising as a result of an unrelenting overtone of people being reminded of their replaceability. People didn’t seem to see the option of leaving their work in the hands of others while in hospital or attending their sibling’s wedding as viable (and it had nothing to do with the competence of the available caretakers).
While anyone who has worked with me will know that I believe there are times in our work life where deadlines simply have to be met and we might have to work weekends or pull overnights during a demanding period, they’ll also tell you that I am a great believer in downtime, in giving people their worked overtime back, and that I don’t believe companies should claim a right to our private time or lives.
However, this place seemed to constantly remind people that being employed there meant the company owned them and their time, and that it always came before their health and their family. Interestingly, environments like this aren’t more productive, they don’t deliver better commercial results and they don’t out-perform their counterparts on the stock exchange. The value they generate for shareholders is not greater than the value that companies with better workplace cultures generate.
Self-exploitation has reportedly increased along with the increase in flexible work options. There is an increase in political discourse around enacting regulations to prevent self-exploitation. However, this is very difficult to achieve since people who do self-exploit are usually caught up in a work culture that normalises and rewards self-exploitation and those that are aware that they find themselves in the cycle of exploitation and self-exploitation may not believe that they have alternatives (irrespective of the reality that good people are always sought elsewhere).
So, what should you do if you’re caught up in a work cycle of exploitation and / or self-exploitation? First of all, there is nothing that you can do without courage. Maya Angelou has said that courage is like a muscle that needs to be trained. Start with something small. Start by blocking out time for breaks in your calendar, and don’t give those slots up to meetings easily. Use your breaks to eat your meals away from your screen, meditate, make human connections, take a walk outside - maybe find a green space to walk to while taking a call.
There is evidence showing that spending time in green spaces is essential to both our health and our productivity. If you are in meetings all day or working 12 - 18 hour days, then you’re doing something wrong; trust me, I know. In the long term, just as companies that nurture exploitative cultures don’t perform better, humans who don’t invest in self-care and, in fact, self-exploit don’t perform better either. Start with something small - carve time out for self-care and have the courage to say no to giving that time over to other priorities. Then start meditating on the idea that the value you can generate and contribute does not depend on your employer - you have a lot to offer the world, and you can also do so from a place that has a greater appreciation of your humanity.
Look out for more tips on mitigating against self-exploitation in the next long-form newsletter.
Entrepreneurs’ Corner - Opportunities for Learning More About Venture Capital and Private Equity Funding
When we invite people to our Leadership Conversations, we ask them to share the topics that are of interest to them. A notable number of entries relate to small businesses and entrepreneurs looking to understand the funding landscape so as to gain access to growth funding. As we ask for topics of interest with the intent of identifying content and events that would be of value to our readers, we’re excited to bring two initial opportunities for understanding the funding landscape better:
The first as mentioned above is the Leadership Conversation with investor Jesmane Boggenpoel. The focus will be on private equity. Register.
The second is a fantastic opportunity to learn more about venture capital by taking the Venture Capital Foundations course from Newton. The benefit here is a better understanding of how venture capital works, so as to better navigate your business’s own funding journey. This might also trigger a desire in you to become a venture capitalist and undertake that learning journey. You can register and take advantage of this self-paced 12-week online course at no cost here.
We’re looking forward to sharing more opportunities around the funding landscape in future.
Whether it is confidence, courage or your own sense of power you’re looking to build, nothing beats taking small practical, daily steps. The micro-decisions and micro-actions we take add up to a lived culture. Every small decision builds us up or deconstructs our progress en route to our better selves.
So, what will you do today to empower yourself and build towards a more confident, courageous you?
Avela Gronemeyer