AI Policy
The landscape of artificial intelligence (AI) usage in scholarship and publishing is evolving. The editorial and management team of CFS/RCÉA acknowledges that AI tools can offer value in many scholarly processes, including ideation and concept refinement, data analysis, image/figure/table creation, audio/video production, linguistic translation, language/grammar correction, and identifying subject-specific resources. Nonetheless, AI tools also generate errors and falsehoods, can compromise the rights and licenses of authors and creators, and have themselves been created within contexts of historical bias, privilege, and prejudice. The environmental impact of AI use, including its water and energy consumption, is also a significant concern.
Within this context, CFS/RCÉA accepts that AI tools may be used in the creation of material submitted to the journal, under the following conditions:
- The human authors whose names are attached to the submission must have played the primary role in the conception and construction of the content.
- A disclosure statement on the use of any AI tool(s) must be provided at the time of submission.
This policy covers content submitted for publication as well as texts submitted by peer reviewers.
If a submission for publication has been created using AI tools, a statement attesting to this must be included in the Comments for the Editor section at the time of submission, and an AI Disclosure Statement must be uploaded to the CFS/RCÉA platform as an accompanying file. (Use this link to download and save a copy of the form to your local computer in order to complete it.) The author and editor will then determine whether a disclosure statement should be included in the final publication.
If a peer reviewer report has been created using AI tools, the reviewer must check the AI-usage checkbox on their report form and include a statement regarding the purpose and extent of AI usage.
Authors and reviewers should contact their CFS/RCÉA editor, a co-Editor-in-Chief, or a co-Managing Editor if they have any questions or concerns regarding this policy.
In the event of any discrepancy or confusion, all responsibility and liability will be assumed by the named author(s) of the publication in question.
A note on food studies and our AI policy
While we acknowledge that generative AI may offer benefits to actors in food scholarship, practice, resistance, and creation, we also believe that caring and attentive human engagement is critical for bringing about greater food systems sustainability, justice, equity, diversity, and well-being. Moreover, given the problematic histories of technocratic ‘solution’-making, corporate capitalism’s efforts to exploit and control nature-culture systems, and the high environmental costs of digital infrastructures and their usage, we are also wary of the risks embedded in the normalization of AI systems and tools.
For these reasons, and until greater clarity about the use of AI (and the motivations of its creators) has been established, CFS/RCÉA has opted to establish the policy and procedures above, and to continue evolving both as the implications of AI themselves evolve. We remain open to discussion on this subject and welcome your feedback. Please contact us with your comments and suggestions.