Discuss and decide on a Fedora Council policy on the use of AI-based tooling for contributions #486
Why?
In the Outreachy 2024 Orientation Call on Friday, 08 Mar 2024, several AI assistant tools were used to transcribe the conversation, both for the convenience of those participating and for those who could not attend (like me). Our applicants suggested tools such as Otter.ai and Read.ai, and while these were used during the call, it was only after the meeting that we realized we need to be a lot more thorough in choosing which AI-based tooling to use for contributions.
AI is cool and all, but I would be wary about the quality of contributions made to the community with the help of AI-based tooling, as well as the safety of the contributors using it. After all, we would hate to have our voices reused somewhere else without our express consent just because we overlooked some legalese written in the smallest possible font. Also, many of these tools require users to log in, so they unknowingly consent to loads of marketing mail that they could totally do without.
What?
As a member of the Fedora Council, I ask that we discuss and decide on a policy for the use of AI-based tooling in making contributions. This could cover, but need not be restricted to, the kinds of AI-based tools used, the ways one could maintain contribution quality, the things contributors need to be wary of when choosing a tool, and the copyright and trademark implications when such a tool is employed.
Some guidance at the project level would be helpful. We would likely need a legal consultation for this, but before we do that, I suggest writing out our values/beliefs about what we feel makes sense as a project policy on this topic.
Some examples.
Read.ai looks fancy, given how accurately it produced a recap of the Outreachy 2024 Orientation Call, divided into chapters/topics, action items, and key questions.
But then I see that they have made it mandatory for folks to sign up to access the logs.
And I slowly sense my excitement turning into wariness.
Otter.ai looks nice as well.
Let us see if they let me in without agreeing to their shady terms.
Ahh... OK. Never mind.
Addendum
The email address in the last screenshot was filled in automatically when I clicked the link I received at my work email address; it is not something I entered myself. I personally refrained from using any of these tools and simply elected to rewatch the recording. This is my personal stand on this situation, and while I frown upon these practices, I do understand that AI-based tools like these could be incredibly helpful in accessibility areas. Until we have a policy governing how these tools can be used, the use of AI-based tooling for contributions is mighty thin ice that folks will keep walking on until one fine day it caves in.
This remains my proposal from the Fedora Council meeting today:
I'm still hesitant on this. I understand your position, but I also feel we as Fedora are leaders in the broader open source community. There may be upstream projects out there waiting for a downstream user of their project to put forth a policy prohibiting code contributions from generative AI. For smaller projects, it makes it easier for them to say "well, my project is used in Fedora and Fedora prohibits generative AI code contributions so therefore I do not allow them in my project."
This could help keep pushing the issue that we do not have a resolution on how open source licensing compliance works (or "creator rights," if you want a term that covers everything) with generative AI.
I like point 3, but I do think Fedora needs something stronger specifically when it comes to code contributions.
Putting aside creator rights, I am also concerned about quality. From what I have seen of generative AI systems, the code quality is just terrible. We work really hard in Fedora to not have terrible code. I don't want to add bad code at an exponential rate.
I don't think it's possible for you to make a statement (point 3) without creating a policy to back it (point 1). I'm not sure whether we should be saying things about what upstreams do per se (point 2), though.
We need a policy about Fedora contributions though.
I agree that point 3 implies point 1.
A statement about upstreams is difficult. For one, Fedora is already upstream for so many things in the open source space. But we also very clearly make the distinction between upstream and Fedora as downstream even when the people involved are the same.
Regarding contributions and specifically code contributions, I think a Fedora policy is necessary. We even direct many contributions back to upstream projects so a Fedora policy could have an indirect effect on the upstream projects.
In any policy we have, I think Fedora needs to identify the concerns that exist right now around generative AI. I've mentioned my concerns around creator rights and licensing compliance as well as overall code quality. Until these are resolved in a way we can trust, I think Fedora needs to both recognize these as real problems and note the areas where generative AI contributions are not permitted. Fedora itself does not want to accidentally violate the terms of an open source license, for instance. Right now we can't say with certainty that generative AI guarantees that will never happen. All we know is that generative AI systems have violated, and continue to violate, open source licensing terms with generated code.
On the other hand, generative AI systems can be useful. For example, say I write a couple of new public functions for a library and use ChatGPT to generate the initial draft of the API documentation from that code. Then I go and fix up some phrasing, add some clarifications, and otherwise make it sound less robotic. I have used it on code I wrote, and I used it to help me write documentation. This is an example of where I'm torn on generative AI. Examples like this feel super useful, while using generative AI to patch broken code feels very scary and problematic.
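To make that workflow concrete, here is a minimal hypothetical sketch: the function and its name are invented for illustration, the body stands in for the human-written code, and the docstring represents the kind of AI-drafted text the author would then hand-edit before committing.

```python
# Hypothetical example: the function body is the human-written code; the
# docstring stands in for a generative-AI first draft that was then
# hand-edited for phrasing and accuracy before being committed.

def normalize_locale(tag: str) -> str:
    """Normalize a locale tag to the canonical ``ll_CC`` form.

    Accepts tags with hyphens or underscores in any letter case
    (for example ``en-us`` or ``EN_US``) and returns the lowercase
    language code joined to the uppercase country code, or just the
    lowercase language code when no country is given.
    """
    language, _, country = tag.replace("-", "_").partition("_")
    return f"{language.lower()}_{country.upper()}" if country else language.lower()
```

The important part is the provenance: the code is human-authored, and the generated prose is reviewed and rewritten before it ships.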
Here's a non-generative example of AI that may serve as a good reference point for future discussions:
I personally have used http://voiceinput.futo.org/, largely when composing long messages on my phone or writing essays for class. It's simply speech-to-text, so I'm still doing all the creativity. The models are all on-device and nothing is sent over the internet. The code behind it is available too, under a semi-open license (the organization supporting it is still working out its stance on going fully open source with its licensing).
I personally love this tool because it saves so much time when typing long text on my phone, and even though I still have to spend time editing out typos, writing the same message still takes me less time in most cases.
I think (per the last council meeting) it may be nice to create some kind of category system for these AI examples to make discussing them easier. Categorizing their uses may also be helpful, since it seems like common themes against the use of AI include:
1. Does the tool respect contributor privacy and consent (no mandatory sign-ups, no recordings or text shipped off to a third party)?
2. Can the tool's output be verified for quality (no hard-to-spot bugs or licensing problems slipping in)?
While Otter.ai and Read.ai may fail the first of these tests, and AI code generation may create hard-to-spot bugs that fail the second, I think there's still some room for some AI (like FUTO Voice or similar accessibility tools) to be used.
This isn't by any means a complete system for classifying all uses of AI in Fedora, but I imagine this could be expanded into a flowchart or something to provide a repeatable, transparent way to "sort" the different uses of AI into buckets (sort of how there's a list of "tests" for fair use of copyrighted material, but still some gray area to accommodate things changing in the future).
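Purely as an illustration (the criteria, attribute names, and bucket labels below are invented, not anything the Council has agreed to), the bucket-sorting idea might look something like this in code:

```python
# A rough, hypothetical sketch (not an agreed policy) of how the two
# "tests" above could become a repeatable, transparent sorting step.
# The attribute names and bucket labels are made up for illustration.

from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    requires_account: bool       # mandatory sign-up / data collection?
    data_leaves_device: bool     # audio or text sent to a third party?
    output_hard_to_verify: bool  # e.g. generated code with subtle bugs

def classify(tool: AITool) -> str:
    """Sort a tool into a coarse bucket based on the two tests."""
    if tool.requires_account or tool.data_leaves_device:
        return "needs privacy review"   # fails the first test
    if tool.output_hard_to_verify:
        return "needs quality review"   # fails the second test
    return "likely acceptable"          # e.g. on-device speech-to-text

# Under these illustrative criteria, on-device speech-to-text such as
# FUTO Voice Input would land in the "likely acceptable" bucket.
print(classify(AITool("FUTO Voice Input", False, False, False)))
```

Like the fair-use tests, this would only be a starting point; the gray areas would still need human judgment.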
FYI this is in progress. Currently, we're gathering information with a user and contributor survey, which will run until July 31st.