Presentation
Virtual AI4SE 2023: Closed Systems Engineering for Artificial Intelligent Systems
Publication Date: 2023-10-11
Start Date: 2023-10-11
End Date: 2023-10-12
Event: 2023 AI4SE & SE4AI Virtual Workshop (Virtual)
Publication: 2023-10-20
Co-Authors:
Dr. Tyler Cody
Dr. Alejandro Salado
Dr. Peter Beling
1 Abstract
As intelligent systems become increasingly integrated into our daily lives, it has become evident that there is a mismatch between traditional systems engineering (SE) practices and the nature of intelligent systems. The main purpose of this paper is to address the current gaps in SE for Artificial Intelligence (AI) (SE4AI) [1–3], focusing on two primary aspects: scope and scale. We argue that the main gap in SE4AI lies within these two aspects. While scope and scalability issues are not unique to AI-enabled systems and exist in other complex systems, the nature of these problems and their potential solutions are unique in AI-enabled systems. The potential impacts of the intelligence property on the outcomes of AI-enabled systems, and the need to align SE practices with such impacts, have recently been investigated, and challenges have been identified [3–5]. The importance of this line of research amplifies as we transition from engineering “narrow AI” (weak AI) to systems with Artificial General Intelligence (AGI), or “strong AI” [2,6,7]. Narrow AI refers to AI-enabled systems that can perform limited tasks, while AGI or strong AI refers to systems that can successfully perform any intellectual task that a human being can [8]. We posit that many of these challenges in SE4AI are linked to the problem of scaling and scoping AI components relative to the entire system architecture, as well as to the context of the operating environment.
1.1 Scaling Problem
One of the main challenges facing AI-enabled systems is scalable intelligence, which refers to the ability to increase intelligence capabilities in a way that is efficient and effective across a variety of contexts. An intelligent system, and by extension any intelligent component that forms it, may always be in a state of learning. Consequently, every activity that an intelligent system goes through becomes a learning opportunity that the system may use. In this situation, deriving low-level use cases for a system’s states and actions may not adequately capture the transition from the “becoming” aspect of the system to the “being” aspect of the system at the higher level [9], a transition enabled by scaling the intelligence property in such systems. To characterize the transition from the learning process (the “becoming” aspect) of an AI-enabled system to the functional morphology (the “being” aspect) of that system at a higher level, traditional SE, although necessary, may not be sufficient.
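A minimal toy sketch (our illustration, not a model from the paper) of why a system that is always learning resists design-time use-case verification: a component whose behavior is checked at one point in its “becoming” may behave differently after further operation, because every interaction also updates it. The `OnlineMeanPredictor` class and its inputs are assumptions introduced purely for illustration.

```python
class OnlineMeanPredictor:
    """Toy intelligent component: predicts the running mean of its inputs,
    updating itself ("becoming") on every observation."""

    def __init__(self) -> None:
        self.n = 0
        self.mean = 0.0

    def observe(self, x: float) -> None:
        # Every operational interaction is also a learning opportunity.
        self.n += 1
        self.mean += (x - self.mean) / self.n

    def predict(self) -> float:
        return self.mean


component = OnlineMeanPredictor()
for x in [1.0, 1.0, 1.0]:
    component.observe(x)
snapshot = component.predict()  # behavior verified at one design-time instant

# Continued operation keeps reshaping the component, so the earlier
# verification no longer describes the deployed system's behavior.
for x in [5.0, 5.0, 5.0]:
    component.observe(x)
print(snapshot, component.predict())  # 1.0 3.0
```

A use case validated against the snapshot behavior captures the system’s “being” at one instant, but not the trajectory of its “becoming.”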
1.2 Scoping Problem
Scoping is a crucial part of the problem definition phase in designing a system [10]. It is the first step in understanding what type of problem needs to be solved. The scoping process usually includes capturing a context diagram, constructing operational definition diagrams, and, consequently, use-case specifications. However, scoping the environment of an intelligent system can be challenging because the system’s behaviors emerge mainly in relation to its context [11]. In intelligent systems, the dissolution of boundaries between the system and its environment means that capabilities derived from the system’s intelligence depend on the relational property between the system and its environment. This relational property serves as an indicator of the scope of such capabilities. However, current SE methods tend to enforce a clear boundary around the system, so that the system is scoped solely through input-output interactions with the environment. To model scope using input-output relations, one needs to capture an instance of a pre-condition in the environment to establish a clear system boundary, and each scenario must be modeled under specific pre-defined environmental conditions. This exponentially increases the number of scenarios required to engineer the scope of an intelligence capability.
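The combinatorial growth can be made concrete with a back-of-the-envelope sketch (the variable counts and discretizations below are assumed numbers for illustration, not figures from the paper): if scope must be modeled through input-output scenarios, each fixed to one pre-condition per environmental variable, the scenario count is the product of the variables’ value ranges.

```python
from math import prod


def scenario_count(values_per_variable: list[int]) -> int:
    """Number of distinct pre-defined environmental pre-conditions,
    one scenario per combination of variable values."""
    return prod(values_per_variable)


# A modest environment: 8 variables, each discretized into 4 values.
print(scenario_count([4] * 8))   # 65536
# Adding 4 more environmental variables multiplies the scenario set again.
print(scenario_count([4] * 12))  # 16777216
```

Each added environmental variable multiplies, rather than adds to, the number of scenarios, which is why fixing pre-conditions to draw a crisp boundary scales poorly for intelligence capabilities that depend on rich context.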
1.3 Solution Proposal
One solution that has been proposed to address these challenges is to incorporate closed systems precepts from systems theory into formal SE practices. The concept of closed systems rests on the idea that a system is an entity bounded by a physical or conceptual boundary and isolated from its environment. Closed systems are self-contained and self-regulating, and their behavior can be predicted and controlled based on the system’s internal rules and mechanisms. Closed Systems Engineering (CSE) can therefore play a significant role in bounding the scale and scope of such systems in their operating environment. In this paper, we demonstrate how closed systems precepts can be applied to SE practices using a simple example of an intelligent system, and we explore different ways of applying closed systems precepts to address the scaling and scoping problems in AI-enabled systems.
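One way the closed-system idea could be realized in software is sketched below (our illustrative interpretation under stated assumptions, not the paper’s method): an explicit, declared boundary is placed around a learning component so that only inputs inside the declared envelope reach it. The `RunningMean` learner, the `ClosedBoundary` wrapper, and the envelope values are all hypothetical names introduced for this sketch.

```python
class RunningMean:
    """Toy learning component: tracks the mean of admitted inputs."""

    def __init__(self) -> None:
        self.n = 0
        self.mean = 0.0

    def observe(self, x: float) -> None:
        self.n += 1
        self.mean += (x - self.mean) / self.n


class ClosedBoundary:
    """Explicit system boundary: only inputs inside the declared envelope
    reach the learning component; everything else remains environment.
    Within the boundary, behavior is governed by internal rules alone."""

    def __init__(self, component, lower: float, upper: float) -> None:
        self.component = component
        self.lower, self.upper = lower, upper
        self.rejected = 0

    def observe(self, x: float) -> None:
        if self.lower <= x <= self.upper:
            self.component.observe(x)
        else:
            self.rejected += 1  # out-of-scope input never shapes the learner


system = ClosedBoundary(RunningMean(), lower=0.0, upper=10.0)
for x in [2.0, 4.0, 100.0, 6.0]:
    system.observe(x)
print(system.component.mean, system.rejected)  # 4.0 1
```

Because the boundary is declared up front, what the component can learn from, and hence what it can become, is bounded by construction rather than enumerated scenario by scenario.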