We aim to build a foundation for digital assurance by delivering proof-of-concept capabilities that include hardware, software, mathematical frameworks, tools, techniques, workflows, metrics, and methodologies. By striving for scalability, generalizability, interoperability, and rigor, research under the DAHCS MC will enable us to efficiently and effectively characterize, assess, and manage digital risk across many types of high consequence systems (HCS).
In the DAHCS MC vision, digital technologies are designed and evaluated as easily as other important components of HCS; a robust research community continues to push the state of the art in DAHCS efforts; and decision makers can make confident, evidence-based statements about the digital assurance of high consequence systems in a timely manner because we can:
Characterize the digital technologies within our systems at any point in their lifecycles
Assess the risks to our systems from digital technologies and adversaries, moving well beyond vulnerability-focused security
Select among design and implementation options that appropriately manage and accept digital risks while balancing them against other considerations (e.g., resilience, safety, size, weight, power, cost)
Research Thrusts
We seek research that improves how evidence is created, evaluated, and used in assurance cases (which make claims about digital technologies both within and directly influencing our testbed HCS controllers), organized into the following three Research Thrusts:
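Assurance cases are commonly structured as a tree of claims supported by sub-claims and concrete evidence (as in Goal Structuring Notation). The following is a minimal, illustrative sketch of that structure; the claims, evidence names, and the strict support rule are hypothetical examples, not DAHCS MC artifacts:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A concrete artifact supporting a claim (test report, proof, review)."""
    description: str

@dataclass
class Claim:
    """An assurance claim, supported by sub-claims and/or direct evidence."""
    statement: str
    subclaims: list["Claim"] = field(default_factory=list)
    evidence: list[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        # A claim counts as supported if it has direct evidence, or if all
        # of its sub-claims are supported (a deliberately strict toy rule).
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.is_supported() for c in self.subclaims)

# Hypothetical top-level claim about a controller.
top = Claim(
    "Controller firmware meets its timing requirements",
    subclaims=[
        Claim("Worst-case execution time is bounded",
              evidence=[Evidence("static WCET analysis report")]),
        Claim("Scheduler never starves the control task"),  # no evidence yet
    ],
)
print(top.is_supported())  # False: one sub-claim lacks evidence
```

Even this toy rule shows why evidence creation and evaluation matter: a single unsupported sub-claim leaves the entire case unsupported.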
I. Scalable Analysis
Dramatically scale end-to-end DAHCS, seeking at least a two-order-of-magnitude improvement (where a baseline exists) in time and cost or in the complexity of the digital technologies handled
discovering the limits of hardware, state, and input complexity that we can reasonably analyze within given design and resource tradeoffs
characterizing tradeoffs needed to achieve given levels of digital assurance
extending and generalizing existing capabilities
Research Challenges
Includes Assuring Target Hardware and Configuration (i.e., claiming that the physical hardware presents the expected digital abstraction; hardware logic is covered in the next Research Challenge), Behavior Coverage (i.e., claiming that hardware logic, software, and component behaviors meet requirements), and Force-multiplying Experts (i.e., scaling the expertise and human judgment needed for DAHCS).
II. Impact Analysis Amid Uncertainty
Measure and increase confidence in an assurance case and its evidence, e.g., by identifying what additional information is needed to increase confidence sufficiently
focusing and evaluating assurance cases
increasing our confidence in them through metrics that have yet to be developed
appropriately allocating our limited resources
Research Challenges
Includes Intelligent Adversary and Hazard Modeling (e.g., explicitly accounting for adversary goals, choices, and capabilities), Model Inference Given Partial Information (i.e., overcoming obstacles to reasoning about a controller’s implementation when relevant design or environment details are incomplete or unreliable), and Failure Consequence Characterization (i.e., enabling end-to-end reasoning about the consequences of failures, including direct impacts such as a single timing delay, aggregate failures such as radiation-induced bit flips combined with a minor timing delay, and indirect impacts such as follow-on failures that cascade from a single upstream failure).
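The distinction between direct and indirect impacts can be illustrated with a toy failure-propagation sketch over a component dependency graph. The components, dependencies, and the breadth-first rule below are hypothetical; real consequence characterization is precisely the open challenge described above:

```python
from collections import deque

# Hypothetical component graph: each entry maps a component to the
# downstream components that depend on it.
DEPENDENTS = {
    "clock": ["sensor_bus", "scheduler"],
    "sensor_bus": ["state_estimator"],
    "scheduler": ["control_loop"],
    "state_estimator": ["control_loop"],
    "control_loop": ["actuator"],
    "actuator": [],
}

def impacted(initial_failure: str) -> dict[str, int]:
    """Breadth-first propagation: return each affected component with its
    distance from the initial failure (1 = direct impact, >1 = indirect)."""
    dist = {initial_failure: 0}
    queue = deque([initial_failure])
    while queue:
        node = queue.popleft()
        for nxt in DEPENDENTS.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

# A single timing delay in the clock reaches the actuator only indirectly,
# through several intermediate components.
print(impacted("clock"))
```

Aggregate failures (e.g., a bit flip plus a timing delay) would require propagating several simultaneous initial failures and reasoning about their interaction, which this simple distance model does not capture.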
III. Integrating with Systems Engineering
Support systems-level decisions about digital assurance and residual risks, including making tradeoffs among digital technologies and digital design options
integrating and using digital assurance evidence within systems engineering approaches
revealing and characterizing emergent behaviors
specifying, understanding, and making effective system-level tradeoffs against digital assurance
Research Challenges
Includes Digital Composition (i.e., combining evidence across digital technologies as well as analysis techniques, abstraction levels, and processing contexts), System Assurability Tradeoff Analysis (i.e., directly comparing the impacts of implementation choices on digital assurance as well as other important characteristics like safety, reliability, resilience, size, weight, power, cost, or schedule), and Evidence Communication for Decision Support (i.e., supporting decision-makers with credible evidence about security and reliability, including characterizing factors that influence decision making).
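To make the Digital Composition challenge concrete, consider a toy "weakest-link" aggregation of per-component assurance scores. The components, scores, and the min-rule itself are illustrative assumptions only; principled composition of heterogeneous evidence is exactly the open research question named above:

```python
# Hypothetical per-component assurance scores in [0, 1], each derived from a
# different analysis technique and abstraction level (values are invented).
component_scores = {
    "hardware_logic": 0.95,   # e.g., from equivalence checking
    "firmware": 0.80,         # e.g., from testing plus static analysis
    "integration": 0.60,      # e.g., from limited system-level tests
}

def compose_weakest_link(scores: dict[str, float]) -> float:
    """Toy rule: a serial composition is no more assured than its weakest part."""
    return min(scores.values())

print(compose_weakest_link(component_scores))  # 0.6
```

Even this crude rule supports decision-making in one narrow sense: it points resources at the component whose evidence is weakest, which is one way assurance evidence can feed system-level tradeoff analysis.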
Other Research Challenges
Revolutionary DAHCS
Approach DAHCS in entirely new ways that meet our needs for scalability, generalizability, interoperability, and rigor.
Targeted Evaluation
Demonstrate integration of DAHCS MC Laboratory Directed Research and Development (LDRD) into an ecosystem through small, applied research projects.