Tech for progressives

Exploring the strategic use of AI and data by progressives
Author

Will Stronge

Published

October 30, 2023

“We recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not).”

- Stanford University researchers [1]

Artificial intelligence (AI) and data-led methods have the potential to be extremely powerful tools for equality, environmental sustainability, social justice and beyond, but - to state the obvious - they are not inherently progressive. That does not mean that progressive actors should leave the field. In fact, I advocate the opposite conclusion: progressives should tool up, and use established AI techniques - as well as more emergent capabilities - to aid their efforts. Tooling up is not prohibitively expensive, the skills are out there, and the appetite exists among those who have acquired them; at Autonomy we know this firsthand.

Conservatives are already using AI in their campaigns, employing algorithms to target voters with personalised messages based on their beliefs and opinions. They have been using AI for years to analyse public opinion on issues such as, in the US, gun control or abortion rights, using this information to understand which issues matter most to voters and tailoring their messaging accordingly (the classic example being Cambridge Analytica's various activities some years ago). In terms of policy, there are ominous calls from the right to use machine learning algorithms to dubiously predict who will commit crime and who should be incarcerated; some want to use facial recognition software to profile people at airports, or predictive policing systems that will tend to target low-income communities of colour. There are also, no doubt, efforts underway by various conservative groups to use machine learning techniques to automatically generate fake news stories and beyond: think ChatGPT-style bots that harass campaigners and public figures online, or LLMs that scour the web to identify fertile new constituencies for radicalisation. In this evolving space of political technology, some are claiming that elections may never be the same again; indeed, they may not have been the same since Obama's 'email election' of 2008. I do not advocate the illegal practices pursued by the likes of Cambridge Analytica (there are surely more effective ways of organising politically anyway), but I gesture towards them to highlight the asymmetry of interest in such techniques. Conservatives have been using AI and data-led methods for years - it is time the left did too.

Actively pursuing AI augmentations - or, as Jaron Lanier calls them, AI-augmented 'social collaborations' - is far from common sense for socialists and many liberals today, who are often (correctly) deeply sceptical of tech solutionism or fearful of using techniques deployed by the right, and who thus show little engagement with the possibilities that AI affords. We share a critical disposition towards Silicon Valley's hype drivel, and part of our ongoing work is to document the platform economy's malaise. However, some of the more explicitly anti-tech discourse online risks blurring the line between Big Tech and digital technology per se: pursuing the latter, and the machine learning techniques behind it, can be seen in these circles as boosting and validating the former - or at least going down the same path. If we followed this line of reasoning closely, two problems jump out immediately:

  1. How do we distinguish which tech is appropriate to use, especially if nearly all of our devices and tools are deeply connected in one way or another to tech corporates and digital platforms? Is Google Docs the limit of ethical use? Is Microsoft Word better because it can be somewhat kept offline? On what grounds, in short, do we not engage with things like large language models (LLMs), if we are already using Big Tech’s products in other, more rudimentary ways?

  2. On what grounds do we consciously disarm ourselves of instruments that our opponents are using and have been using for some time? If we accept as given that these tools are accessible and that they will continue to be integrated into established research and intelligence-gathering methods, then choosing not to use them should be seen as an explicit decision.

Here we might detect an aversion to dabbling with 'the master's tools', as Audre Lorde's phrase goes: do we risk doing more damage than good if we use AI-informed methods in our work? If Big Tech firms - who own the hardware and the data war chests - are part of all that is wrong with our society, and it is they who are driving AI adoption, then surely that is sign enough that we should go no further down this path. As Rodrigo Nunes once pointed out to me in conversation, this is at best a misleading dictum: yes, we absolutely can dismantle houses with the tools they were made with. If we look at the alternative means we might use to accomplish our political ends - often more analogue, more time-consuming tech with less reach - then it seems obvious that we need to keep updating our tools to expand our repertoire of capacities. The proof is always in the pudding with any tool - and the usefulness of machine learning and more advanced forms of AI for various knowledge- and intelligence-gathering initiatives is now well established.

This is not to say that machine learning, machine vision or other forms of AI or tech augmentation should be the focus of progressive organising, research or narrowly political work; these things are instruments, and political and economic change will never do away with the personal, cultural or social elements - nor should we try. Achieving economic justice and stopping climate catastrophe are primarily political problems, not technological ones.

We are, however, saying that, as a baseline, progressives should be tooled up with at least some AI capabilities - and in a best-case scenario it would be progressive forces innovating in AI development and use, prohibiting the private capture of this tech, and devising better strategies within political parties, civil society organisations, trade unions and ultimately governments. Technology won't save the day, but one kind of tech or another plays a part in every initiative that does. Right now we are acting as if we can (or should) do without the latest data processing and analysis tools, when in other sections of society - e.g. in marketing, on social media platforms and across academic disciplines such as medicine, sociology and of course computer science - these tools are fast becoming standard procedure. The problem can also be seen in terms of social groupings: in crude terms, there are currently two distinct cohorts:

On the one hand there are 'tech people' - working in tech start-ups and larger organisations within climate tech, ed(ucation) tech, fintech, health and so on. Politically, the majority of this fairly young cohort tend to be progressive: they care about climate, about the cost of rent, housing and childcare; they are broadly on board with equality issues such as trans rights, feminism and racial justice. They range from a broadly centrist to a leftist perspective - but in general spend very little of their time thinking about capital-P Politics. This is the social group that Jeremy Gilbert sketches in Twenty-First Century Socialism as a potential demographic for allyship. In our experience, this group tends to include highly skilled workers when it comes to data wrangling and information and communications work, and they have ways of executing projects that are far ahead of progressive political organisations; but the sector as a whole betrays a lack of political nous, and of any strategy by which their efforts might contribute most effectively to change. Prevalent, for example, is the idea that private sector investment will be dependable and sufficient for scaling up solutions to climate change - or indeed that solving climate change will be profitable enough to be more attractive than other investments. This strategic deficit is reflected in the spread of Effective Altruism within the sector, according to which giving a portion of one's income to charity is perceived as one of the best (if not the best) mechanisms for making change in the world. We don't have to re-hash the limitations of charity as an approach to inequality, deprivation and creating better worlds - suffice to say that in most cases charity is inferior as a long-term lever to wielding state capacity or changing the very structures of our economies.

On the other side are 'political people' - those working, paid or unpaid, to bring about political, social and/or economic change. Here we mean researchers in political parties, NGOs, campaigns and think tanks; strategists and organisers in campaign groups and trade unions; general activists and grassroots party members. Through being at the political-change grindstone on a daily basis, many of this cohort have fairly robust and regularly updated analyses of the political playing field. However, their methods of organising, researching and planning are largely data- and tech-poor, with most groups relying on Google search for data gathering and on the manual scouring of unstructured data and official datasets for their knowledge of the state of the country.

Bringing these two groups together - or at least into communication - is, in my view, a key task for progressive movements.

At Autonomy, we've been using forms of AI - specifically machine learning - for a couple of years, in order to refine the datasets we work with or to generate new ones. This includes using natural language processing to sift through tweets within a collected Twitter network in order to detect different political communities, or to harmonise occupation information across national databases. We think uses like these could enrich many other progressive organisations: think of the potential for campaign groups looking to map power networks, for example, or trade unions who want to plot transitions between jobs and industries but could do with better data. There are many more uses we are only beginning to explore - especially via LLMs.
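To give a flavour of the first of these tasks, here is a minimal sketch of one standard approach: representing tweet text as TF-IDF vectors and clustering them with k-means (scikit-learn). This is not our production pipeline - the file name, column names and number of clusters below are hypothetical - but it captures the shape of the work.

```python
# Illustrative only: cluster collected tweets into candidate political
# communities. File name and column names below are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = pd.read_csv("collected_tweets.csv")  # hypothetical columns: user, text

# Represent each tweet as a TF-IDF vector over word unigrams and bigrams
vectoriser = TfidfVectorizer(max_features=20_000, ngram_range=(1, 2),
                             stop_words="english")
X = vectoriser.fit_transform(tweets["text"])

# Partition tweets into k clusters; k is a guess, tuned by inspection
kmeans = KMeans(n_clusters=8, n_init=10, random_state=42)
tweets["cluster"] = kmeans.fit_predict(X)

# Print the most distinctive terms per cluster, to label them by hand
terms = vectoriser.get_feature_names_out()
for c, centre in enumerate(kmeans.cluster_centers_):
    top_terms = [terms[i] for i in centre.argsort()[::-1][:8]]
    print(f"cluster {c}: {top_terms}")
```

The number of clusters is always a judgement call: in practice you would tune it and read the clusters by hand before treating any of them as a 'community'.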

In the climate advocacy and action space, some organisations are already innovating further. The non-profit Project Canopy, for instance, uses satellite imagery and machine learning tools to detect logging routes in the Congo Basin, so as to track irresponsible deforestation in the area (a sketch of this kind of workflow follows below). Another obvious future use of AI is in developing economic systems - beyond the chaos of markets - that are robust enough to mitigate catastrophic climate change. Theorists such as Max Grünberg are making original theoretical contributions on harnessing the predictive analytics of forecasting systems currently used in firms such as Amazon towards the ends of a socially useful distribution of goods. Others, such as Aaron Benanav, are also interested in this kind of tech, but want to flesh out where the democratic, decision-making joints could and should lie once we have neutered the market's 'invisible' choice infrastructure. As writers such as Holly Jean Buck and Benjamin Bratton have drawn attention to in their respective writings, managing the planet's balance of resources, carbon stocks and flows, and assessing the complex climate impacts of various policies, will likely require extensive sensing, forecasting and correcting infrastructure. These deployments of technology are not just fun sci-fi concepts, but will be necessary tools for survival.
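Returning to the Project Canopy example: an illustrative sketch of that kind of workflow is to score pre-cut satellite image tiles with a fine-tuned classifier and flag likely logging roads for human review. Project Canopy's actual models and data pipeline are their own; the checkpoint, file paths and threshold here are hypothetical.

```python
# Illustrative only: score satellite image tiles with a binary classifier
# (logging road vs. not) and flag likely positives for human review.
# The checkpoint and tile paths are hypothetical placeholders.
from pathlib import Path

import torch
from PIL import Image
from torchvision import models, transforms

# ResNet-18 with a two-class head, assumed fine-tuned and saved elsewhere
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("logging_road_classifier.pt"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

flagged = []
for tile_path in Path("tiles").glob("*.png"):
    x = preprocess(Image.open(tile_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        prob_road = torch.softmax(model(x), dim=1)[0, 1].item()
    if prob_road > 0.5:  # placeholder threshold, tuned in practice
        flagged.append((tile_path.name, prob_road))

print(f"{len(flagged)} tiles flagged for human review")
```

The important design choice is the last step: the model narrows thousands of tiles down to a reviewable shortlist, with humans making the final call.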

There are no doubt a great many other uses for AI and AI-related tools that progressives can capitalise on - and we are just scratching the surface. As open source communities plough on with rapid innovation in this space, we should tool up, help each other understand what can be done and enter the fray with new capacities. After all, we know that conservative forces will be doing the same.

To this end I am excited about the new Autonomy Data Unit (ADU) blog, about Autonomy's own foray into AI-powered tech, and especially about the new CADA network of scientists that we are pulling together: by pooling collective R&D capacity, we hope to contribute to the project of a more advanced strategy for system change.

References

[1] Alpaca: A Strong Open-Source Instruction-Following Model. Stanford CRFM, 2023. https://crfm.stanford.edu/2023/03/13/alpaca.html