Salesforce today announced a set of guidelines for developing trusted artificial intelligence.
According to Terry Nicol, the company's Vice President for the Middle East and North Africa, Salesforce is continuously working to embed ethical principles and controls into all of its products, with the aim of helping customers innovate and address potential challenges proactively, continuing the approach it takes across all of its innovations.
Given the enormous opportunities and challenges emerging in this field, Nicol said the new set of guidelines is a step beyond the company's earlier principles for developing trusted artificial intelligence, with the focus now on the responsible development and deployment of generative AI technologies. He noted that these principles are still evolving to keep pace with the early stages of this transformative technology, and affirmed the company's commitment to learning from experience and collaborating with others to find solutions.
Here are the five principles for ensuring trusted AI:
Accuracy
Deliver verifiable AI results that balance accuracy, precision, and recall in models by enabling customers to train models on their own data.
Companies should communicate openly when there is uncertainty about the veracity of AI responses, and users should be able to validate those responses themselves.
This can be done by citing sources and explaining why the AI gave the responses it did, for example by tracing the reasoning steps it followed to reach them; by highlighting details that need to be double-checked, such as statistics, recommendations, and dates; and by creating guardrails that prevent certain tasks from being fully automated, such as deploying code to a production environment without human oversight.
Safety
As with all AI models, every effort should be made to mitigate bias, toxicity, and harmful outputs by conducting bias, explainability, and robustness assessments on models, and by red teaming to uncover vulnerabilities.
The privacy of any personally identifiable information present in the training data must also be protected, and controls must be in place to prevent additional harm, such as requiring code to be published to a sandbox rather than automatically pushed to a production environment.
Honesty
Data provenance must be respected, and consent must be obtained when using data, whether open-source or user-provided. Be transparent that content is AI-generated when it is delivered autonomously, for example in a virtual assistant's response to a consumer, including through the use of watermarks.
Empowerment
A distinction must be made between cases where it is desirable to fully automate processes and cases where artificial intelligence should play a supporting role to humans.
The right balance must be struck between supercharging human capabilities and making these solutions accessible to all, for example by generating alternative text that describes the content of images for people who cannot see them.
Sustainability
Alongside the effort to build more accurate models, AI models should be developed at the most appropriate size possible in order to reduce their carbon footprint. With AI models, bigger does not always mean better: in some cases, smaller, better-trained models outperform larger, less well-trained ones.