Editorial Policy on the Use of Artificial Intelligence (AI) Tools

  1. General Framework

This policy establishes the standards of Configurações: Revista de Ciências Sociais regarding the responsible use of Artificial Intelligence (AI) tools by authors, reviewers and the editorial team throughout the process of submission, review, editing and publication of manuscripts. This policy must be considered in conjunction with the Journal’s Code of Ethics, the Code of Ethical Conduct of the University of Minho, the UMinho Editora Good Practice Guide and international editorial best practice, in particular the guidelines of the Committee on Publication Ethics (COPE) and the European Commission, as well as other widely recognised references, such as the guidelines of the Directory of Open Access Journals (DOAJ).

  2. Fundamental Principles

The use of AI tools in Configurações: Revista de Ciências Sociais must comply with a set of fundamental principles, namely:

  • Human responsibility: scientific, ethical and legal responsibility for the content submitted always lies with the authors and cannot be delegated to automated systems;
  • Authorship and originality: AI tools may not be credited as authors or co-authors, as they cannot assume intellectual or legal responsibility for content;
  • Transparency: the use of AI must be clearly declared by the authors;
  • Scientific integrity: AI may not replace the critical, interpretative and analytical work inherent to scientific research;
  • Respect for ethics and confidentiality: AI use must comply with the applicable ethical standards and safeguard sensitive and confidential data.


  3. Guidelines for Authors

Permitted Uses

Configurações considers responsible use of AI tools to be admissible for:

  • Exploring and developing early research ideas, theoretical frameworks or discursive structures;
  • Supporting the organisation, classification and synthesis of existing scientific literature, provided that the content generated is carefully verified and fully understood by the authors;
  • Identifying relevant gaps in the discursive framework;
  • Improving the linguistic quality of the manuscript (grammatical, spelling and stylistic corrections);
  • Suggesting text reorganisation or clarification of the arguments set out.

In all cases, AI must be used solely as an auxiliary tool and not as a substitute for autonomous scientific reflection. Furthermore, its use must always be explicitly declared.


Limits and Prohibited Uses

AI tools may not be used to:

  • Replace the author’s intellectual work, including the critical review of the literature;
  • Fabricate data, citations, references or results;
  • Create or manipulate images, figures or tables, unless AI is explicitly integrated into the research methodology and clearly described as such;
  • Upload copyrighted content, unpublished manuscripts, databases or sensitive data without authorisation, thereby compromising the confidentiality of data and/or research participants.


Responsibilities of the Author

Authors must:

  • Ensure that the manuscript reflects their own original intellectual contribution;
  • Examine the Terms and Conditions of the AI tool(s) used, ensuring that:
    • Their use does not compromise the confidentiality of personal data (in accordance with applicable legislation, namely the GDPR) or research data;
    • The content entered (including draft manuscripts, datasets, tables, or supplementary materials) is not used for model training or for purposes other than the provision of the requested service, unless explicit and informed consent has been obtained;
    • Their use does not entail the transfer or assignment of intellectual property rights over the content submitted;
    • Their use does not impose any restrictions on the subsequent submission, peer review, or publication of the manuscript.
  • Verify the accuracy, integrity and impartiality of all AI-generated content;
  • Validate AI-generated data, interpretations and references using reliable, original sources;
  • Critically assess potential biases introduced by the AI tools;
  • Protect sensitive, personal or confidential data by refraining from entering such data into AI tools.


Mandatory Declaration of AI Use

Authors must disclose the use of AI tools by submitting an Artificial Intelligence (AI) Usage Declaration along with all other requested documents.

The declaration is mandatory for all submissions and must indicate:

  • Whether or not AI tools were used in the preparation of the manuscript;
  • Where relevant, the tool(s) used (name and version);
  • The nature and purpose of their use;
  • The process for human verification and review of the content generated;
  • The impact (or absence thereof) on the arguments and conclusions of the article.

Where relevant, AI use must also be mentioned in the methodology section or in another appropriate section, such as the Acknowledgements section, in order to ensure transparency towards readers regarding the practices used in the preparation of the manuscript.


  4. Guidelines for Reviewers

  • Manuscripts, data and materials submitted for review are strictly confidential and may not be shared with third parties;
  • The uploading of manuscripts, excerpts, data, images, or any content related to the peer review process to unauthorised AI tools, in particular generative tools (whether public or private) that may retain, reuse, or use such content for model training or other purposes, is strictly prohibited. The use of institutional or contracted tools for plagiarism detection or similarity checking is, however, permitted, provided that it is carried out in strict compliance with confidentiality obligations;
  • AI tools may not be used to analyse, summarise, evaluate or formulate scientific judgements about manuscripts, or to support the substantive content of review reports;
  • AI may be used exclusively for linguistic and stylistic improvement of the review text, with the reviewer retaining full responsibility for all assessments and recommendations.

Whenever a reviewer suspects or identifies inappropriate or undeclared use of AI tools in a manuscript, they must confidentially inform the Editor-in-Chief or the Editorial Team, refraining from contacting the authors directly or making accusations in the review report. The analysis and investigation of such incidents fall exclusively within the remit of the editorial team, in accordance with best practice in publication ethics.


  5. Guidelines for Editors

Editors are responsible for safeguarding the confidentiality, integrity and independence of the editorial process. Therefore:

  • They are prohibited from uploading any manuscripts or confidential materials to unauthorised AI tools, in particular external tools that may retain, reuse, or use such content for model training or for other purposes incompatible with the duty of confidentiality. Only tools that ensure compliance with the duty of confidentiality and data protection may be used;
  • AI may not be used to analyse manuscripts, support editorial decisions, or replace human editorial judgement;
  • The use of AI is admissible only for administrative or organisational tasks, or for improving the clarity and consistency of editorial communication, provided that no confidential content is involved and that human validation is always ensured;
  • Editors are responsible for verifying the existence, clarity and adequacy of AI use declarations submitted by authors, as well as assessing compliance with this policy.

As part of its commitment to scientific integrity and editorial transparency, Configurações: Revista de Ciências Sociais may also use automated tools to support the editorial process, particularly for the identification of textual similarities and potential signs of undisclosed AI use. These tools include the iThenticate platform, provided by the University of Minho and widely used in the context of international scientific publishing. iThenticate operates in a secure environment, does not make the manuscripts analysed public, does not transfer rights over submitted content and does not use texts to train AI systems, complying with all applicable data protection and confidentiality standards. The use of automated tools is strictly auxiliary and does not, under any circumstances, replace critical evaluation and human editorial judgement.

Whenever signs of inappropriate or undisclosed use of AI tools arise, editors shall analyse the situation in a prudent and proportionate manner, ensuring the parties involved are given an opportunity to explain and adopting appropriate measures in accordance with best practice in publication ethics.


  6. Non-Compliance and Editorial Procedures

Violations of this policy may include:

  • Omission or inaccuracy in the AI use declaration;
  • Use of AI for purposes deemed unacceptable under this policy;
  • Breaches of confidentiality;
  • Copyright infringement.

Any incidents identified shall be handled in accordance with the COPE guidelines. Depending on the seriousness and nature of the breach, the editorial team may take one or more of the following measures:

  • Request additional clarification from the authors regarding the use of AI tools;
  • Request correction or completion of the Artificial Intelligence (AI) Usage Declaration;
  • Request revisions to the manuscript (where AI use compromises its clarity, originality or scientific rigour);
  • Temporarily suspend the editorial process;
  • Reject the manuscript (in cases where a serious violation of this policy is confirmed);
  • In the case of articles that have already been published, issue corrections or editorial notes, or retract the article, where appropriate.


  7. Final Provisions

This policy will be reviewed regularly to keep pace with evolving AI technologies and international guidelines on ethics and scientific integrity. Submission of manuscripts to the journal implies full acceptance of the rules set forth herein.