AI & ETHICS: How can we benefit from AI without Undermining our Authenticity, Values & Credentials as an Ethical Business?
- Susan Lawson Thought Leadership

- Aug 7
You can always tell when companies are aware they are doing something unethical or controversial: they begin to invent or deploy unusual terms and phrases designed to soften the truth about what they're saying.
For example, whilst the word ‘transition’ in the context of climate change is, as I've said elsewhere, accurate – we are in a transition from a carbon-based to a carbon-free economy – we can't deny that the word has also become prevalent in part as a sneaky way to defer the need for change indefinitely.
Similarly, I was both saddened and amused yesterday to read a report listing various companies, ranging from Duolingo through to Cisco, who are currently letting go huge numbers of employees in favour of AI. The report delicately referred to this as “adjusting” their workforces.
To be clear, I do not have a straightforwardly negative view of AI; in fact, I increasingly see many positive use cases. But I still have reservations and concerns, and there are problematic facts that it is simply disingenuous to ignore.
On the most pragmatic level, widely available generative AI platforms still produce too many inaccuracies to be truly time-efficient, at least for certain purposes. For example, I can see the use of AI in short-form content generation (with human editorial oversight), because accuracy matters far less there than in long-form, deep-dive Thought Leadership materials, where it's critical to get academic references right and to be fully on top of the details of a subject. The number of errors and hallucinations that widely available AI still produces means that rigorous checks remain necessary, and in my experience these take just as long as, if not longer than, it would have taken to simply do the work 'by hand'.
But more important are the ethical factors: for example, are we not simply lying to ourselves if we believe that businesses (including our own) will be using AI for any purpose other than to save money by laying off staff—or by failing to hire the freelancers or external support that we would have done had we not had access to AI?
Of course, there may be a strong business case for this. But how then do we reconcile that with pronouncements about how we are contributing to Social Value? The UN Sustainable Development Goals (SDGs) cover not only environmental issues but also human happiness, welfare and wellbeing – issues tied in large part to the ability to make an honest living. Even here in the West, far too many people are already out of work against their will, or being unethically underpaid, as it is.
Yet for the last few years, whenever this point has been raised, it has been swept aside with vague platitudes about how new and different jobs will be created, vague suggestions that people 'just need to retrain', or utopian pronouncements about how marvellous life will be when ‘we don’t need to work’. The latter in particular ignores the fact that work, no matter how unglamorous, is known to benefit one's sense of accomplishment (so long as it is reasonably paid and in good conditions, of course); it also rather naively assumes that the productivity gains are somehow going to be passed on to ‘the people at large’ – there is no current evidence that this will be the case.
Of course, in place of lay-offs, we might hire the same number of people while creating massively more productivity. And it’s true that some industries and companies have intentionally devised ways to deploy AI without causing mass redundancies. The driverless truck company Aurora Innovation for example seems to have been mindful of research showing that this would only happen in the fastest-case scenario of rushing the transition. In a slower or medium-length transition case, this could be undertaken at roughly the same pace as the natural turnover of truck drivers, whilst also addressing the problem of an ageing workforce in this sector and contributing to both economic and safety improvements.
Similarly, I’ve seen research into uses of AI where the cost efficiencies created were intended, at least in part, to be actively deployed for positive gain, such as improved salaries for the workers who remain. However, this is likely to be a very rare viewpoint. In addition, my interest is in ethical business, which is not the same as charity, nor the same as social enterprise. So I’m not going to suggest that all savings or efficiencies should be ‘redistributed’ rather than helping to boost bottom lines, because I see zero chance of this in fact happening and I prefer to deal in possible realities.
Yet we still have to consider whether embracing AI (if we choose to do so, and most eventually will) balances good and bad social consequences, especially for those we currently hire or might have hired. Even if you do not truly care about this (and I hope, if you’re calling yourself an ethical company, that you do), mass layoffs simply do not square with Social Value credentials. It’s a dreadful optic for a company claiming an interest in positively contributing to society and I do not believe this is an issue that can be ignored or glossed over, either ethically or from a Brand Value positioning perspective.
If your bottom line is higher but you choose to hire far fewer people, you are simultaneously making productivity gains and higher revenue – and therefore contributing higher taxes (good for the economy) – whilst in other ways undermining the economy (increased unemployment is a bad outcome both economically and socially). Moreover, claims that small businesses should be let off the hook for sub-par wages because ‘they create jobs’ will start to ring hollow indeed.
This will undoubtedly sound naïve to those who still believe that ‘the business of business is business’, but from a post-Friedmannian ethical business perspective we simply cannot uncritically deploy AI if there isn’t at least some benefit for the smaller number of people that we do hire – I would go so far as to say that we should also be considering how to mitigate the negative impacts more broadly.
For example, will your use of AI really be freeing up your smaller workforce to ‘do more creative thinking’ (or, on the contrary, will they be reduced to AI fact-checkers, a role that few would find fulfilling)? Will they see better perks, whether directly or indirectly financial, such as trialling shorter working days (let’s face it, most people can’t fully function beyond six hours as it is)? Or other perks that you should be more than able to afford if you are saving on hiring costs and National Insurance contributions?
You may also want to look at other ways that you can help mitigate the problems in society that the ever-growing mainstreaming of AI is going to create. Do we really see a blissful utopia of endless work-free lazy days in the sun for all but the most ambitious, fuelled by magical income streams whose source nobody is willing to pinpoint? Call me cynical, but this sounds like a mirage. In addition, as noted, work is for many (if not most) essential to wellbeing. Even within this fantastical vision, there are only so many leisure hours that can be filled before utopia slips over into negative social outcomes.
I certainly don't have a wholly negative stance on AI – and I may well produce another post from a more positive perspective later. But we cannot keep sweeping these issues under the rug, especially as announcements about layoffs begin to increase. Self-proclaimed ethical companies are in no position to mindlessly jump aboard an apparent AI gravy train without serious consideration of the ethical issues raised.
Interested in Ethical Business? You may want to visit here.
