Responsible and Secure AI for the Future

Join the Working Group for Developing Responsible and Secure AI
We are excited to invite industry practitioners and leaders to join a collaborative working group aimed at developing a comprehensive framework for securing AI. This initiative brings together experts from various domains, including cybersecurity, governance, regulatory compliance, and privacy, to address the unique challenges associated with AI adoption, with a specific focus on Generative AI technology. We are seeking enthusiastic individuals who can contribute their knowledge and expertise to shape the future of secure AI practices.

Background:
The rapid advancements in AI technologies, particularly Generative AI, have presented organizations with new opportunities and challenges. To ensure responsible and secure AI adoption, it is crucial to develop a framework that provides guidance and guardrails to organizations as they build, adopt, or utilize Generative AI and other AI technologies. The working group aims to create an initial framework quickly, followed by iterative improvements, to address the specific issues, challenges, and cyber threats associated with Generative AI.

Working Group Structure:
1. Chair: The working group will have a chair responsible for overall coordination and leadership.
2. Co-chairs: There may be one or more co-chairs who will assist the chair in facilitating discussions and driving the working group's activities.
3. Category Leaders: Each sub-working group category, such as cybersecurity, governance, regulatory compliance, and privacy, will have a dedicated leader responsible for guiding discussions and insights within their respective domain.

Meeting Schedule:
1. The working group will initially meet every two weeks for a period of three months to kick-start the framework development process.
2. Following the initial phase, the meetings will be held on a monthly basis, ensuring continued progress and collaboration.
3. Sub-working group category members may meet more frequently, approximately on a weekly basis, to delve deeper into their specific areas of expertise and contribute to the development of guardrails.

Project Scope and Onboarding Instructions:
1. The working group will start by preparing a summary of how Generative AI works, highlighting its unique characteristics and potential risks.
2. A comparative analysis will be conducted to identify the key differences in issues, challenges, and cyber threats between Generative AI and existing technologies like cloud security.
3. The initial focus will be on creating guardrails that can be adopted quickly, providing organizations with immediate guidance while allowing for iterative improvements and scalability.

Participation Guidelines:
1. We are seeking volunteers with expertise in one or more of the following areas: cybersecurity, governance, regulatory compliance, privacy, AI technologies, and Generative AI.
2. As a member of the working group, active participation, collaboration, and contribution to discussions are essential.
3. Respectful and inclusive behavior towards all participants is expected, fostering an environment conducive to open dialogue and idea exchange.
4. Members are encouraged to share their experiences, best practices, and research findings to enrich the development of the framework.
5. All contributions to the working group will be duly acknowledged, and members will have the opportunity to be recognized as industry thought leaders.

How to Volunteer:
1. Interested individuals are requested to email their expression of interest to cffprograms@cyberfuturefoundation.org.
2. Please include your full name, contact information, professional background, and areas of expertise in the email.
3. Briefly outline your motivation to join the working group and any relevant experience or contributions you can bring to the project.

Working Group Meeting:

Initiation - AUG 29, 2023 (Tue 10 - 11 AM US Central Time)

The schedule for recurring meetings will be decided by the Chair and Co-chairs through a vote.

We look forward to assembling a dynamic and diverse working group that will contribute to the development of a robust framework for securing AI. Your participation will help shape the future of AI technologies and ensure responsible and secure adoption. If you have any queries or require further information, please contact CFF Programs at: cffprograms@cyberfuturefoundation.org.

Best regards,

CFF Programs Team

CFF SAIF Working Group Intake Form

CFF Responsible and Secure AI CoE Townhall

Check out our CFF Responsible and Secure AI CoE Townhall, where leaders and practitioners from our community come together to support the development of a responsible and secure AI framework.
