{"id":451205,"date":"2024-10-20T09:16:36","date_gmt":"2024-10-20T09:16:36","guid":{"rendered":"https:\/\/pdfstandards.shop\/product\/uncategorized\/bsi-pd-iso-iec-tr-54692024\/"},"modified":"2024-10-26T17:17:14","modified_gmt":"2024-10-26T17:17:14","slug":"bsi-pd-iso-iec-tr-54692024","status":"publish","type":"product","link":"https:\/\/pdfstandards.shop\/product\/publishers\/bsi\/bsi-pd-iso-iec-tr-54692024\/","title":{"rendered":"BSI PD ISO\/IEC TR 5469:2024"},"content":{"rendered":"
| PDF Pages | PDF Title |
|---|---|
| 7 | Foreword |
| 8 | Introduction |
| 11 | 1 Scope; 2 Normative references; 3 Terms and definitions |
| 14 | 4 Abbreviated terms; 5 Overview of functional safety; 5.1 General |
| 15 | 5.2 Functional safety |
| 16 | 6 Use of AI technology in E/E/PE safety-related systems; 6.1 Problem description; 6.2 AI technology in E/E/PE safety-related systems |
| 20 | 7 AI technology elements and the three-stage realization principle; 7.1 Technology elements for AI model creation and execution |
| 22 | 7.2 The three-stage realization principle of an AI system; 7.3 Deriving acceptance criteria for the three stages of the realization principle |
| 23 | 8 Properties and related risk factors of AI systems; 8.1 Overview; 8.1.1 General; 8.1.2 Algorithms and models |
| 24 | 8.2 Level of automation and control |
| 25 | 8.3 Degree of transparency and explainability |
| 27 | 8.4 Issues related to environments; 8.4.1 Complexity of the environment and vague specifications; 8.4.2 Issues related to environmental changes |
| 28 | 8.4.3 Issues related to learning from the environment |
| 29 | 8.5 Resilience to adversarial and intentional malicious inputs; 8.5.1 Overview; 8.5.2 General mitigations; 8.5.3 AI model attacks: adversarial machine learning |
| 30 | 8.6 AI hardware issues |
| 31 | 8.7 Maturity of the technology; 9 Verification and validation techniques; 9.1 Overview |
| 32 | 9.2 Problems related to verification and validation; 9.2.1 Non-existence of an a priori specification; 9.2.2 Non-separability of particular system behaviour; 9.2.3 Limitation of test coverage; 9.2.4 Non-predictable nature; 9.2.5 Drifts and long-term risk mitigations |
| 33 | 9.3 Possible solutions; 9.3.1 General; 9.3.2 Relationship between data distributions and HARA |
| 34 | 9.3.3 Data preparation and model-level validation and verification |
| 35 | 9.3.4 Choice of AI metrics; 9.3.5 System-level testing |
| 36 | 9.3.6 Mitigating techniques for data-size limitation; 9.3.7 Notes and additional resources; 9.4 Virtual and physical testing; 9.4.1 General; 9.4.2 Considerations on virtual testing |
| 38 | 9.4.3 Considerations on physical testing |
| 39 | 9.4.4 Evaluation of vulnerability to hardware random failures; 9.5 Monitoring and incident feedback; 9.6 A note on explainable AI |
| 40 | 10 Control and mitigation measures; 10.1 Overview; 10.2 AI subsystem architectural considerations; 10.2.1 Overview; 10.2.2 Detection mechanisms for switching |
| 43 | 10.2.3 Use of a supervision function with constraints to control the behaviour of a system to within safe limits |
| 44 | 10.2.4 Redundancy, ensemble concepts and diversity |
| 45 | 10.2.5 AI system design with statistical evaluation; 10.3 Increase the reliability of components containing AI technology; 10.3.1 Overview of AI component methods; 10.3.2 Use of robust learning |
| 46 | 10.3.3 Optimization and compression technologies |
| 47 | 10.3.4 Attention mechanisms; 10.3.5 Protection of the data and parameters |
| 48 | 11 Processes and methodologies; 11.1 General; 11.2 Relationship between AI life cycle and functional safety life cycle |
| 49 | 11.3 AI phases; 11.4 Documentation and functional safety artefacts; 11.5 Methodologies; 11.5.1 Overview; 11.5.2 Fault models |
| 50 | 11.5.3 PFMEA for offline training of AI technology |
| 51 | Annex A (informative) Applicability of IEC 61508-3 to AI technology elements |
| 64 | Annex B (informative) Examples of applying the three-stage realization principle |
| 69 | Annex C (informative) Possible process and useful technology for verification and validation |
| 72 | Annex D (informative) Mapping between ISO/IEC 5338 and the IEC 61508 series |
| 75 | Bibliography |