Using the Best Practices
The Best Practices are extensive and can be useful in many situations. Read what they can be used for and how they fit in the bigger picture.
What can the Best Practices be used for?
The Best Practices can be used for the following:
A way to get started with implementing ethical and responsible machine learning in products;
A common language between data scientists, engineers, managers and governance & compliance professionals;
A repository for responsible machine learning questions;
A point of reference for machine learning audits, policies, governance and regulations.
The Best Practices are very versatile. To make them even more so, there are other parts of the ecosystem one can turn to:
User Guides - Where to start the implementation, depending on the organisational context;
Open Source Portal - Contributing to the Best Practices wiki-style;
Supporting Material - More elaborate descriptions of the Best Practices and additional examples, added and contributed over time;
Tools ecosystem - Tools that help execute, organise and govern controls. There are many opportunities for useful tools to be developed with the community.
The Best Practices do not make assumptions about:
the size of the organisation applying them;
the machine learning technologies used; and/or
the domains or fields they are being applied to.
This means that the Best Practices are written in a general manner, and their relevance (whether collectively or for each individual part) will differ from case to case. Again, this highlights the importance of appreciating context. It also means that risks particular to, for example, a specific technology or domain are not included in the Best Practices.
This also means that applying the Best Practices is no guarantee that machine learning applications will be inherently ethical, safe or responsible. There are simply too many context-dependent details for each product to be captured in a general framework. However, when applied with an appreciation of the risks the context determines, the Best Practices will move an organisation significantly closer to ethical, safe and responsible machine learning.