11/04/2023

Start-ups are increasingly engaging with AI – both as a tool to write code and as the ‘engine’ that powers their innovative products and services. However, as we all now know, the gains from AI are not without risks to consumers, principally risks to fairness and privacy. Because start-ups sit at the ‘point of inception’ of AI, they need to engage with the ethical issues it raises.

This is more than a ‘feel-good’ ESG issue. A recent study by researchers from the Stern School of Business and Boston University found that, while adopting an ethical AI policy does not of itself correspond with increased performance for start-ups, there is evidence that investors reward start-ups that couple their AI policy with more costly preventative pro-ethics actions, such as seeking expert guidance, training employees about unconscious bias, rejecting or supplementing inadequate training data and hiring diverse programmers.

The challenge of scale and expertise for start-ups

The researchers acknowledge upfront that start-ups face constraints in investing in an AI ethics framework:

“…start-ups face the same ethical challenges as larger technology firms, and their AI products pose similar potential risks to outcome fairness. Yet, in addition to decoding the ethical norms in their swiftly growing industry and navigating regulatory grey areas, these start-ups must quickly develop their initial product to raise funds and survive”.

The decision to spend time and scarce resources on a thorough-going AI ethics framework may delay product development, which in turn can delay VC funding rounds at a time when cash is already being rationed, and can slow getting the product to market in a highly competitive environment.

Also, just because you have the ‘tech wizardry’ required to make a success of a start-up does not mean that you have the skillset to understand and address the ethical issues of AI. Quite the opposite: as the researchers note, identifying ‘socio-technology’ biases in programming or in training data is not always an easy task, and “[a]lgorithmic bias issues are even difficult for experienced programmers to recognize, and even if they are aware of a problem, they may be unable to diagnose it accurately.” There are some limited regulatory guardrails already in place, especially in the privacy space, but these are still evolving to align more closely with current social and ethical expectations. Moreover, they almost always fall short of requiring businesses to interrogate whether steps should be taken, not just whether they lawfully can be.

So, is it worth start-ups making a significant investment in AI ethics, and if so, why?

The study

The researchers sent a digital survey to founders, chief technology officers, and executives at 4,593 AI-producing start-ups with under 500 employees, achieving an 8% response rate. Responding firms were, on average, about five years old and employed a mean of 25 people, with almost half having ten or fewer employees. The survey was administered worldwide, but nearly 80% of responses came from the United States, Canada and Europe.

The survey asked whether firms had an AI policy and whether and how they had ever enforced it. Of responding firms, 58% had an ethical AI policy, but 40% had never invoked their policy in a way that led to a costly outcome, such as dropping data, firing an employee, or turning down sales. There was ‘heterogeneity’ by industry, with start-ups in healthcare and trade the most likely to have an AI policy.

The researchers also recognised that many start-ups have undertaken pro-ethics actions not necessarily connected with the adoption of ethical AI principles, such as considering diversity when selecting training data or hiring a minority or female programmer. The researchers considered that, even in the absence of a formal AI ethics policy or its explicit enforcement, these actions are preventative measures that set up the organisation to develop more ethical products: for example, diversity in the life experience of programmers is likely to help in the identification of bias in training data. Almost 80% of responding start-ups took at least one pro-ethics action.

The researchers paired the survey data with employee-level skills data sourced from LinkedIn profiles, firm-level demographic, funding and investor data from Crunchbase and Pitchbook, firm-level data on cloud providers from BuiltWith, and data on ESG policies and practices by industry sector (i.e. to identify whether firms in some sectors – whether start-up or established – were better at addressing ethical issues).

What they found

The researchers found:

  • start-ups that had an AI ethics policy were more likely to engage in other pro-ethics actions, such as hiring more diverse programmers, collecting more diverse data, or consulting with an ethics expert. This correlation persists even when controlling for firm-level measures such as region, size, founder MBA and prior funding, as well as industry sector-wide ESG practices.
  • there is also a positive correlation between having a female founder and considering diversity in training data selection, with the researchers commenting that “females may be more aware of how unrepresentative training data impacts outcome fairness for demographic subgroups.”
  • when a start-up has a relationship with a larger, established technology firm, the start-up is more likely to have an AI ethics policy. While some heterogeneity based on industry persists, other firm-level control variables (region, size, founder MBA, and prior funding) do not significantly affect the coefficient linking the larger technology firm relationship to adoption of an ethical AI policy.
  • where the relationship between the start-up and the larger tech company goes further and extends to data sharing, there is a “positive and strong” correlation with the start-up taking action to use more diverse training data to avoid bias.

Assessing the impact of AI ethics on investor funding involved further statistical analysis of the extent of pro-ethics actions taken by the start-up (i.e. an ‘index’ based on the number of different types of pro-ethics activities undertaken by the start-up over and above having a policy). The researchers found:

  • no correlation between the ‘bare fact’ of having an AI ethics policy and funding. There was no change when additional firm-level factors were added, including whether the start-up was based in a location with a high proportion of VCs (New York, San Francisco Bay Area, London, and Boston) or whether the start-up’s founder had an MBA, both of which could plausibly be correlated with raising additional funds.
  • the relationship between funding and taking pro-ethics actions grew stronger the greater the number of pro-ethics actions taken by the start-up, with little correlation where the start-up took only one such action but a significant relationship where it took three or more. The researchers concluded that “[t]hese findings support that pro-ethics actions, including hiring minority programmers, searching for more representative data, and engaging with experts, are valued by and visible to investors and relate to greater performance.”
  • the correlation between funding and pro-ethics actions appeared to be stronger for start-ups dealing with more complex AI systems, such as neural networks, where managing the ethical issues around data is likely to require a relatively bigger investment by the start-up. The analysis also shows a significant correlation between success in obtaining funding and start-ups that both have an ethical AI policy and discard data in order to follow it.

Research conclusions

The researchers make the obvious point that forging a relationship with a larger, established tech company provides a discipline, if not a shortcut, for start-ups developing and implementing an AI ethics policy:

“In almost all scenarios explored, a data-sharing relationship with a technology firm relates to startups engaging in behaviors commonly seen as antecedents to more ethical product development. In the absence of regulation, large technology firms, for better or worse, play a large role in setting norms and guiding more ethical AI development. Startups may find it difficult to navigate complex ethical issues without these relationships or may not have robust enough data resources in-house to develop their products or drop less representative data.”

The conclusions drawn on investor funding are more interesting. The researchers frame the challenge faced by investors as follows:

“Information asymmetries about the startup's practices make it difficult for investors to determine which startups are better investments. This has likely only been exacerbated in recent years by the onset of "spray and pray" investing, where venture capital firms are less well-connected to startups than previously.”

Investors need to look for ‘signals’ about the quality of the investment they are being asked to make. This was easier in the case of traditional technologies because IP instruments, such as patents, were important in “reducing information asymmetries and signalling quality to venture capital investors… [but] patents are less commonly used by high-tech start-ups developing AI products, as code-based operating systems and software are difficult to protect under current IP regimes.”

So what signals can VCs look for in an AI start-up? The researchers argue that the mere fact of having an AI policy will not be seen by investors as a good enough signal:

“The adoption of an ethical AI policy is not a particularly costly way for startups to signal their willingness and ability to adopt the extant ethical norms in their industry. They could copy stock language from the publicly accessible policies of other firms, or they could choose not to enforce or abide by their policy. Even if this policy is a talking point with investors, it seems unlikely that investors would value this action.”

In effect, it could be little more than ‘AI ethics washing’.

However, the researchers posit that investors may be more inclined to invest in AI start-ups that take more costly preventative actions demonstrating their prioritisation of ethical AI development. Though some of these actions remain unseen, it is easy for investors to gauge whether the start-up is hiring more diverse talent, including female or minority programmers or employees with a background in data privacy, ethical data usage, or training. An ethical commitment that forms part of the core values or mission statement of an AI start-up, that is genuinely connected with its objectives, and that is actively built into and endorsed through its operations and day-to-day processes may well combine visibility with impact.

Of course, start-ups looking for ways to signal their quality will need “to invest their scarce resources into preparing their firms for the ethical issues that lie ahead in a way that is visible to investors.” But this is not, as some US Republicans argued in relation to the failure of Silicon Valley Bank, a case of ‘wokeness’ burdening start-ups.

Regulation of AI is increasing, and the original developers of an AI system may be in the frame for loss or damage caused by high-risk AI where they have not taken reasonable care or, if a product liability approach is adopted, may face strict liability where the AI is ‘defective’: see, for example, the EU’s AI Act. Investors may also need to worry about PR-related issues or reduced exit opportunities for start-ups with ethical problems: ‘ethically light’ AI could become as much a concern for people investing in VC funds as ‘carbon-heavy’ industries.

Read more: The Role of Ethical Principles in AI Startups

""