Guidance for integration and responsible use with Text Analytics

As Microsoft works to help customers safely develop and deploy solutions using the Text Analytics service, we are taking a principled approach to upholding personal agency and dignity by considering the fairness, reliability & safety, privacy & security, inclusiveness, transparency, and human accountability of our AI systems. These considerations are in line with our commitment to developing Responsible AI.

This article provides guidance on how to integrate and responsibly use the Text Analytics service, based on the knowledge and understanding of the team that created this product.

Integration and responsible use principles

When getting ready to integrate and use AI-powered products or features, the following activities help set you up for success:

  • Understand what it can do: Fully vet and review any AI model you are using so you understand its capabilities and limitations.
  • Test with real, diverse data: Understand how your system will perform in your scenario by thoroughly testing it under real-life conditions, with data that reflects the diversity of your users, geographies, and deployment contexts. Small datasets, synthetic data, and tests that don't reflect your end-to-end scenario are unlikely to sufficiently represent your production performance.
  • Respect an individual's right to privacy: Only collect data and information from individuals for lawful and justifiable purposes. Only use data and information that you have consent to use for this purpose.
  • Legal review: Obtain appropriate legal advice to review your solution, particularly if you will use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and your responsibility to resolve any issues that might come up in the future.
  • System review: If you're planning to integrate an AI-powered product or feature into an existing system of software, customers, and organizational processes, take the time to understand how each part of your system will be affected. Consider how your AI solution aligns with Microsoft's Responsible AI principles.
  • Human in the loop: Keep a human in the loop. This means ensuring constant human oversight of the AI-powered product or feature and maintaining the role of humans in decision-making. Ensure you can have real-time human intervention in the solution to prevent harm, so you can manage situations where the AI model doesn't perform as required.
  • Security: Ensure your solution is secure and has adequate controls to preserve the integrity of your content and prevent any unauthorized access.
  • Customer feedback loop: Provide a feedback channel that allows users and individuals to report issues with the service once it's been deployed. An AI-powered product or feature requires ongoing monitoring and improvement after deployment, so be ready to act on feedback and suggestions for improvement.

Integration and responsible use for Text Analytics PII

Make sure that proper consent has been given for the use of the data you run the PII service on. For example, if a company holds resumes from past job applicants who were not hired, and those applicants didn't consent to being contacted about promotional events, the PII service should not be used on the resumes to collect contact information for inviting them to a trade show.
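One way to make that check explicit in code is to gate the PII call on recorded consent. The sketch below assumes the azure-ai-textanalytics Python SDK (5.x); the endpoint, key, and the consent bookkeeping (the consented_purposes field) are hypothetical placeholders, not part of the service.

```python
# Minimal sketch: only send documents to the PII service when the data subject
# has consented to the purpose at hand.
# Assumes the azure-ai-textanalytics Python SDK (5.x); endpoint, key, and the
# consent model are hypothetical placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Hypothetical records: each document carries the purposes the individual consented to.
records = [
    {"id": "1", "text": "Jane Doe, jane@example.com, applied in 2021.", "consented_purposes": {"recruitment"}},
    {"id": "2", "text": "John Roe, john@example.com, applied in 2020.", "consented_purposes": set()},
]

purpose = "recruitment"  # the purpose for which PII will be extracted
eligible = [r for r in records if purpose in r["consented_purposes"]]

if eligible:
    results = client.recognize_pii_entities([r["text"] for r in eligible])
    for record, result in zip(eligible, results):
        if not result.is_error:
            for entity in result.entities:
                print(record["id"], entity.category, entity.text, entity.confidence_score)
```

In this sketch the second record is never sent to the service because no consent covers the stated purpose; how you track consent in practice will depend on your own data governance processes.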

Integration and responsible use for Text Analytics for Health

  • Healthcare related data protections: Healthcare data has special protections in various jurisdictions, such as HIPAA in the US. Make sure you know the regulations for your jurisdiction and take special care with security and data protection when building your system, given the sensitive nature of health-related data. The Azure Architecture Center has articles on medical data storage and HIPAA and HITRUST compliant health data AI that you may find helpful.
  • Protecting PII and PHI: Text Analytics for health does not anonymize the data you send to the service. If your system will present the service's response alongside the original data, consider using the Text Analytics Named Entity Recognition PII service to identify and redact those entities, as sketched below.
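
The following sketch shows one way to pair the two capabilities, again assuming the azure-ai-textanalytics Python SDK (5.1 or later): the PII call's redacted_text stands in for the original document wherever it would be surfaced next to the extracted health entities. The endpoint, key, and sample text are placeholders.

```python
# Minimal sketch: extract healthcare entities, but surface only a
# PII-redacted version of the original text in the UI or logs.
# Assumes the azure-ai-textanalytics Python SDK (5.1+); endpoint and key are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = ["Patient John Doe (SSN 859-98-0987) was prescribed 100mg ibuprofen."]

# Long-running healthcare analysis: extracts medical entities such as medications and dosages.
poller = client.begin_analyze_healthcare_entities(documents)
health_results = list(poller.result())

# Separate PII call: redacted_text masks names, SSNs, and other identifiers.
pii_results = client.recognize_pii_entities(documents)

for health_doc, pii_doc in zip(health_results, pii_results):
    if health_doc.is_error or pii_doc.is_error:
        continue
    # Present the redacted text instead of the raw input.
    print("Source (redacted):", pii_doc.redacted_text)
    for entity in health_doc.entities:
        print(" ", entity.category, "->", entity.text)
```

Note that this only masks what the PII model recognizes; it is not a substitute for the security controls and compliance review described above.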

Learn more about Responsible AI

See also