The Best of Both Worlds: Human Developers and AI Collaborators | by Mark Ridley | Aug, 2023


Now, I’d like to address many of the thought-provoking challenges I’ve received in conversations on this topic over the last few months, as well as the fact that I’ve made an awful lot of assumptions in the service of my original hypothesis.

I’ve left a lot of potential counterarguments unaddressed, not because they haven’t been bothering me, but because there’s just too much uncertainty and too much work still to do in experimenting with these tools and seeing how they really assist teams.

Here’s a big old list of many criticisms and unanswered questions:

You’re putting a lot of faith in AI tools to write quality code. They can’t possibly be as good as humans.

I admit it: I’m impressed with the quality of the tools, but I’m also not trying to make the case that they need to be as good as great developers. I have deliberately avoided modelling a scenario without technical expertise, because in my experience of using the tools they still need a great deal of supervision and guidance. Supervision to make sure they aren’t doing anything stupid or dangerous (like writing API keys into source code), and guidance to ensure they’re doing the right thing (using the correct language, framework or design pattern).
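To make the ‘supervision’ point concrete, here’s a minimal, hypothetical sketch in Python of the sort of thing a reviewer still has to catch. The endpoint, variable names and key are all made up; the point is simply that an assistant will happily hard-code a secret unless you steer it towards configuration instead:

```python
import os

import requests

# Risky pattern an assistant might generate unprompted: a secret baked
# straight into source control.
# API_KEY = "sk-live-123abc"   # <- the kind of thing a reviewer has to catch

# Safer pattern to steer it towards: read the secret from the environment.
# The variable name and endpoint below are purely illustrative.
API_KEY = os.environ["PAYMENTS_API_KEY"]

response = requests.get(
    "https://api.example.com/v1/charges",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```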

I have a sense that these AI tools need to be the McDonald’s of software engineering; while going to a great, non-chain restaurant with exceptional staff can be a transcendent experience, it’s not always possible. McDonald’s is universally clean, cheap, healthy (I mean, not infested with bacteria) and, above all, consistent. As one of my dearest Italian friends once said when confronted with a big chain delivery pizza, “It doesn’t kill you”. To an extent, this is the bar we’re shooting for with AI tools.

But I also don’t think this is the end of the story. The tools we see today are nowhere close to the quality they will reach in a year. Even as I edited the original article into this series, there was news every single day about more improvements: UC Berkeley introduces a model that writes better API calls than GPT-4; Microsoft announces ‘dilated attention’, which allows models to scale to billions of tokens with LongNet; Stack Overflow announces OverflowAI with promises of even more snarky responses to silly questions (sorry, I mean better and more useful search).

Even if you’re sceptical about the ability of the tools today, I fear it would be short-sighted to ignore the potential of the capabilities they are likely to develop.

[Edit: Even in the week or so since I first drafted this article, Stack Overflow has announced OverflowAI, GitHub has announced additional tools to prevent intellectual property issues and StabilityAI has announced a coding-focused LLM. The market moves at an astonishing pace.]

AI tools will have intellectual property issues. And security problems. And if they stop working, we won’t be able to work anymore

Yep, these are all possible.

But it’s not our first time working around issues like this, and we already have ways of mitigating them. A lot of the companies I talk to are in some kind of paralysis out of a concern that they will leak company secrets which will be used to train the LLM further. Whilst this can be true for free and individual subscriptions, I would strongly recommend that you, dear reader, do your own research on the larger providers to understand exactly what that risk is, and what the vendors are doing to address this very reasonable concern. Have a look at some of the FAQs from the big players to see whether there is a sufficiently good answer for your use case and risk profile: OpenAI, GitHub Copilot, AWS CodeWhisperer (Google Duet is still in closed beta and data security docs weren’t available).

It’s a similar case with security and data protection. Most of you reading today are already dependent on GitHub, Microsoft or AWS security. You probably either store your code on GitHub, or your apps or data on Azure, GCP or Amazon. Ask yourself why you are content to accept the risk for a hypercloud vendor, but not for a coding tool. The risk of using ChatGPT is non-negligible, with a data leak reported in May, news of jailbreaking potential reported this week, and persistent data leakage by internal users reported by cloud security vendor Netskope. As with any other piece of technology, you can choose simply to ban it from your organisation, but people, like nature, always find a way. To properly address security issues you need to educate users and provide secure, easy-to-use alternatives. If OpenAI isn’t up to the task, maybe one of the other vendors is.

Another worry is inadvertent exposure to intellectual property risk: for example, where the model has been (‘accidentally’?) trained on material which is closed source, and the tool exposes your organisation to the risk of breaching the law (and of having to remedy that breach) simply by using it. Here’s the bad news: if you think this is a new risk, you should probably take a closer look at your use of open source in your organisation. Many companies fall far short of properly managing and understanding the risks of their ‘Software Bill of Materials’ (SBOM), the list of closed and open source dependencies that their software has. You absolutely should be concerned about the risk that one of these tools might incidentally put you in breach of someone else’s intellectual property rights, but you should extend the controls that you already use for open source software and your developers’ big, red cut-and-paste button.
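If you want a feel for how opaque that dependency list can get, here’s a minimal sketch, using only the Python standard library, that lists the packages installed in an environment along with whatever licence metadata they declare. It’s a rough starting point for thinking about an SBOM, not a substitute for proper tooling such as CycloneDX or SPDX generators:

```python
# Rough starting point for an SBOM-style audit of a Python environment:
# list installed packages with the licence they declare.
from importlib.metadata import distributions

for dist in sorted(distributions(), key=lambda d: (d.metadata["Name"] or "").lower()):
    name = dist.metadata["Name"]
    version = dist.version
    # Some packages only declare their licence via classifiers, so fall
    # back to those if the License field is empty.
    licence = dist.metadata.get("License") or "; ".join(
        c for c in dist.metadata.get_all("Classifier", []) if c.startswith("License ::")
    ) or "unknown"
    print(f"{name}=={version}  ->  {licence}")
```

Even this toy listing tends to turn up licences a team didn’t realise it was shipping, which is exactly the gap the controls above are meant to close.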

These risks are yours, and if you’re serious about investigating the opportunities these tools may provide, you should also read the docs and speak to the vendors about the steps they are taking to protect you. Make sure you do your homework. The Common Sense privacy report for Copilot scored it 63%, low enough to earn an ‘amber’ warning (it scored particularly low on ‘School’ and ‘Parental Consent’, which dragged it down).

This should always be part of your procurement process anyway. You should always consider any tool you are going to put near production code or data from a risk perspective, and it falls to you to decide on your appetite for risk and how to mitigate it.

On a more positive note, I think Salesforce’s recent announcements are a good indication of the direction these tools will head in. A huge part of Salesforce’s marketing push for AI Day focused on what they call the ‘Einstein Trust Layer’, which seems to be a genuinely impressive wrapper for different LLMs that goes a long way towards securing access and protecting both customer and company information (seriously, check out this video, even though it’s not about code).

We’ve just seen seven major tech companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) sign up to voluntary ‘Responsible AI’ commitments that include watermarking output. It’s reasonable to assume that the largest players in this market, most of them already companies we entrust with huge amounts of our data and infrastructure, will release similar trust and security wrappers that answer a lot of outstanding questions about intellectual property, security, privacy and the propensity for LLMs to be a little bit toxic and have a questionable relationship with the truth.

Someone will still need to architect the overall solution.

See also:

  • Someone is going to need to manage all of the data and the data pipelines
  • Someone will need to manage the overlaps between applications
  • Someone will need to secure the products
  • Someone will need to monitor and respond to issues
  • Someone will need to manage dependencies and interactions between teams and products

Yep, you’re right. There are still an awful lot of jobs that need to be done that aren’t just writing code.

This is great!

It means we still need people for a bit longer, and it starts to show us where we should be providing training for our engineers. But we should also expect that tooling, automation and better-tested, higher-quality code will start to have a positive impact on this list of issues, too. With more documentation, less ‘clever’ code, cleaner interfaces, better explainability and better test coverage, we may see fewer of these types of challenges anyway.

But most of an engineer’s time isn’t actually spent coding because we keep getting told to go to stupid meetings

Yes, true. AI isn’t going to solve all the problems, but if you’re an engineer spending more time in meetings than coding, the problem isn’t with your productivity, it’s with how your organisation is run.

Maybe AI will solve that one for you one day, but in the shorter term it’s possible that smaller, leaner and more automated organisations won’t need so many meetings.

To be frank, for most organisations the best way to improve developer productivity is to do a better job of prioritising work, saying no to low-value outcomes and giving teams more time back to focus on delivering high-quality products. But if we can’t have that, maybe we can have AI coding assistants instead.

Couldn’t we replace product managers with AI instead?

Ah, this is a good one.

One of the surprising bits of feedback I’ve already had on these articles has been, “that’s really interesting, but in my business we don’t have anywhere near enough product support”. It turns out that I was showing an unconscious bias with my 5:1 ratio of engineers to product managers; many tech teams were already struggling hugely, not only because their ratios were much higher than that (say, 10:1 or more), but also because the product skill wasn’t valued highly enough within the organisation. Some companies still seem to think that engineers write code and product managers are an expensive luxury.

I think eliciting requirements from customers is a really big deal. I also think that running an effective, easy-to-understand commercial prioritisation process is very hard. It’s going to be a while before we can get stakeholders to do their own prompt design for software.

Product managers are critical to great product engineering. They focus engineering resource and help the business identify the most valuable outcomes. I think the last thing we need right now is a reduction in product support, even if we can assist with some of their daily tasks through automation.

My suspicion is that the product manager and tech lead roles are going to be the most critical ones in these new teams.

You haven’t thought about X or Y or Z

You’re almost certainly right. But if you have, that’s great! Drop me a comment, start a discussion or, best of all, write a new article explaining your thoughts on Y or Z. (But not X. We don’t talk about X anymore.)


