General Rules and Policy of TiFA Challenge
General Introduction
Our challenge is a competition featuring two main tracks designed to advance the field of multimodal language model trustworthiness and the development of trustworthy agents. Both tracks aim to deepen our understanding of AI robustness and trustworthiness, leveraging cutting-edge datasets and evaluation methods. We invite researchers and practitioners from around the world to participate, contribute, and push the boundaries of what is possible in the realm of trustworthy AI systems.
Policies
Deadlines:
All deadlines (including challenge results submission, research proposals, and winner technical reports) are strict. Under no circumstances will extensions be granted.
Ethics:
Authors and members of the program committee, including reviewers, are expected to follow standard ethical guidelines. Plagiarism in any form is strictly forbidden, as is unethical use of privileged information by reviewers and meta-reviewers, such as sharing this information or using it for any purpose other than the reviewing process. All suspected unethical behavior will be investigated by an ethics board, and individuals found violating the rules may face sanctions. This year, we will collect the names of individuals who have been found to have violated these standards; if individuals representing conferences, journals, or other organizations request this list for decision-making purposes, we may make this information available to them.
The use of LLMs is allowed as a general-purpose writing assist tool. Authors should understand that they take full responsibility for the contents of their papers, including content generated by LLMs that could be construed as plagiarism or scientific misconduct (e.g., fabrication of facts). LLMs are not eligible for authorship.
Track I
MLLM Attack
Introduction
The primary goal of this challenge is to execute a successful attack on an MLLM, LLaVA-1.5. Participants must alter either the input image or the input text to significantly impair the model's accuracy. The core of this challenge involves ingeniously designing inputs that prompt the MLLM to generate incorrect or harmful outputs, thereby evaluating the model's robustness against attacks.
Generally, for each given input pair (I, T), we aim to design specific MLLM attack methods that automatically construct an adversarial image perturbation ΔI and/or an adversarial textual prompt ΔT, such that the target MLLM generates inaccurate or unsafe outputs (i.e., a choice for multiple-choice questions, or sentences for harmlessness evaluation) on the perturbed inputs (i.e., (I+ΔI, T), (I, T+ΔT), or (I+ΔI, T+ΔT)). The similarity between the original and adversarial inputs must be greater than 0.9. Lower accuracy or more unsafe responses indicate a better attack method.
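For concreteness, below is a minimal PGD-style sketch in PyTorch of the image-perturbation variant (I+ΔI, T). The model's forward interface, the cross-entropy loss toward a wrong answer, the L-infinity budget, and the use of flattened cosine similarity as the reading of the 0.9 constraint are all illustrative assumptions, not the official challenge pipeline or evaluation code.

import torch
import torch.nn.functional as F

def pgd_image_attack(model, image, text_ids, target_ids,
                     epsilon=8 / 255, alpha=1 / 255, steps=100):
    # Construct a perturbation delta (ΔI) that steers the model toward
    # `target_ids` (an assumed encoding of an incorrect/unsafe answer).
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Assumed interface: model(I + ΔI, T) returns answer logits.
        logits = model(image + delta, text_ids)
        loss = F.cross_entropy(logits, target_ids)  # push toward the wrong answer
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()          # descend on the targeted loss
            delta.clamp_(-epsilon, epsilon)             # keep ΔI within an L∞ budget
            delta.add_(image).clamp_(0, 1).sub_(image)  # keep I + ΔI a valid image
        delta.grad.zero_()
    return delta.detach()

def similarity_ok(image, adv_image, threshold=0.9):
    # One possible interpretation of the "> 0.9" requirement:
    # cosine similarity between flattened original and adversarial images.
    sim = F.cosine_similarity(image.flatten(), adv_image.flatten(), dim=0)
    return sim.item() > threshold

In practice, the epsilon budget and the similarity metric should be chosen to satisfy whatever similarity measure the organizers use for verification; the sketch only illustrates the general perturb-then-check workflow.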
Track II
Frontiers in Trustworthy Agents
Introduction
The evolution of Artificial Intelligence (AI) reflects its increasing capability and integration into our daily lives, from basic automated tools to sophisticated, autonomous systems. Initially, AI systems were simple agents performing goal-directed actions without specific human commands. Over time, these evolved into Multimodal Large Language Models (MLLMs, e.g., GPT-4 / GPT-4o and Gemini), which not only execute complex tasks but also enhance decision-making through advanced language understanding and generation capabilities. The most advanced tier of AI development includes ethically aligned, trustworthy agents. These agents are designed to operate reliably within ethical and safety frameworks, illustrating both the benefits of AI's integration into society and the essential need to manage its risks. This tiered framework, from basic agents, through linguistically skilled LLMs and MLLMs, to ethically guided trustworthy agents, underscores the progression and potential of AI in enhancing human capabilities and addressing complex challenges.
Frequently Asked Questions
Can we submit a paper that will also be submitted to NeurIPS 2024?
Yes.
Can we submit a paper that was accepted at ICLR 2024?
No. ICML prohibits main conference publications from appearing concurrently at the workshops.
Will the reviews be made available to authors?
Yes.
I have a question not addressed here, whom should I contact?
Email the organizers at icmltifaworkshop@gmail.com.