Recent developments in artificial intelligence (AI) may pose significant threats to personal data privacy, national security, and social and economic stability. AI-based solutions are often promoted as "intelligent" or "smart" because they autonomously optimize various processes. Because they can modify their behavior without human supervision by analyzing data from the environment, AI-based systems may be more prone to malfunctions and malicious activities than conventional software systems. Moreover, due to existing regulatory gaps, the development and operation of AI-based products are not yet subject to adequate risk management and administrative supervision. In response to recent reports about potential threats arising from AI-based systems, this paper presents an outline of a prospective risk assessment for adaptive and autonomous products. This research produced extensive catalogs of possible damages, initiating events, and preventive policies that can be useful for risk managers conducting risk assessment procedures for AI-based systems. The paper concludes with an analysis and discussion of the changes in business, legal, and institutional environments required to assure the public that AI-based solutions can be trusted, are transparent and safe, and can improve the quality of life.
Additional information
- DOI
- 10.2478/fman-2021-0008
- Category
- Journal publication
- Type
- Journal articles
- Language
- English
- Publication year
- 2021