Exigent™ AI Computer Vision Software

AI Computer Vision Trained Algorithm

Archarithms’ Exigent™ Artificial Intelligence Computer Vision software application is built around a deep convolutional neural network architecture that exploits the spatially local correlation of features by enforcing a connectivity pattern in the synapses between neurons of adjacent layers. Coded in the Python programming language, the network is trained using Archarithms’ state-of-the-art training algorithms and regularization techniques. These include Stochastic Gradient Descent, Variable Mini-batch Size, Variable Learning Rate, Dropout, and Batch Normalization.
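
A minimal sketch of those building blocks, assuming PyTorch and an arbitrary small architecture (Exigent's actual network, framework, and hyperparameters are not described here), might look like:

    # Illustrative only: a small convolutional network using the techniques named
    # above (SGD, dropout, batch normalization, a stepped learning rate).
    # Exigent's actual architecture and hyperparameters are not public.
    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1),   # locally connected filters
                nn.BatchNorm2d(32),                            # batch normalization
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.BatchNorm2d(64),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Dropout(p=0.5),                             # dropout regularization
                nn.Linear(64 * 8 * 8, num_classes),            # assumes 32x32 input images
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = SmallCNN()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)            # stochastic gradient descent
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)  # variable learning rate

    def train_epoch(loader, loss_fn=nn.CrossEntropyLoss()):
        model.train()
        for images, labels in loader:   # mini-batches; the batch size can itself be varied per epoch
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()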

ImageBrew

Computer Vision Data Processing API

ImageBrew is an AI computer vision data processing capability that automatically generates data from captured images, augments the images with environmental effects, and labels the data to required specifications. The data is then stored for computer vision algorithm training based on specific application needs.

Designed for computer vision product developers who create pipelines for processing image data, ImageBrew synthetically produces images of the highest quality without the need for additional curation effort. Unlike existing solutions that require humans to manually isolate images, remove backgrounds, augment the images, and label the data, ImageBrew is a fully automatic synthetic data processing API designed to integrate seamlessly with the developer’s system and processes.
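
As a rough illustration of the kind of augment-and-label step ImageBrew automates (the function names, effects, and label format below are hypothetical, not ImageBrew's API), a pipeline might apply environmental effects and write a label record alongside each generated image:

    # Hypothetical sketch of an automated augment-and-label step; ImageBrew's real
    # API is not shown here. Uses Pillow for simple environmental effects.
    import json
    import random
    from pathlib import Path
    from PIL import Image, ImageEnhance, ImageFilter

    def augment(image: Image.Image) -> Image.Image:
        """Apply a random environmental effect (brightness shift, optional blur)."""
        image = ImageEnhance.Brightness(image).enhance(random.uniform(0.6, 1.4))
        if random.random() < 0.5:
            image = image.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.5, 2.0)))
        return image

    def generate_dataset(src_dir: str, out_dir: str, label: str, copies: int = 5) -> None:
        """Write augmented copies of each source image plus a JSON label file."""
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        records = []
        for path in Path(src_dir).glob("*.png"):
            for i in range(copies):
                augmented = augment(Image.open(path).convert("RGB"))
                name = f"{path.stem}_{i}.png"
                augmented.save(out / name)
                records.append({"file": name, "label": label})
        (out / "labels.json").write_text(json.dumps(records, indent=2))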

AI Computer Vision Data Services

Computer Vision Data

Archarithms Artificial Intelligence (AI) Computer Vision Data Services provide synthetically produced data for projects with limited data available for AI algorithm training. This service is designed for global organizations that need to isolate and classify objects in complex images and video, and to perform data mining and pattern recognition across large datasets.

Archarithms Data Services uses a proprietary collection process that allows targeted collection of images ‘in the lab’ with complete control of all relevant variables. Unlike targeted collection performed ‘in the wild,’ Archarithms’ process allows active control of variables such as lighting, weather effects, and background scenes that cannot be directly controlled when collecting natural images by traditional means.

The Archarithms AI Computer Vision data is ideal for applications where complex inputs are needed to generate insights, provide personalization, predict events, and make probabilistic recommendations at a greater scale than traditional technologies allow. The synthetic data provided by Archarithms is used to train AI computer vision algorithms to achieve robust performance where speed and accuracy are critical. Unlike data that is crowdsourced or collected from the internet, Archarithms’ synthetic data is customized, drawing on thousands of raw or augmented images not readily available elsewhere.

Athena

Human Machine Interface Software Platform

A combination of a Human Machine Interface (HMI) software platform and a User Experience (UX) evaluation process, Athena provides powerful tools for creating modern, usable HMI products that are tailored to the needs of the Warfighter in both operational and training situations. Using modern software technologies and a tiered, modular integration framework, we provide a cross-platform environment that allows rapid development, timing and effectiveness analysis, and rich visualization capabilities that immerse the Warfighter in the battlespace environment. Combined with a robust user experience analysis process, we iterate on displays while collecting timing data in order to provide a demonstrably effective system. The result is HMIs that provide Situational Awareness coupled with Engagement Operations, optimized to the needs of the user. HMI optimization is achieved by engineering displays that adhere to human processes rather than prescribing rigid interactions. In this fashion, the Warfighter takes over the system’s traditional role as the focal point; the system instead serves as an assistant to the user’s decisions and processes.

Acquire

Analysis Tool Supporting Combatant Command Course of Action

Archarithms’ Acquire is an analysis tool capable of supporting Combatant Command (COCOM) Course of Action (COA) and “what if” analyses. Acquire integrates defense design analysis, “what if” analysis, COA development, and demonstrated COA effectiveness into a single state-of-the-art system. Acquire seamlessly supports growth paths from Ballistic Missile Defense (BMD) to Integrated Air and Missile Defense (IAMD), space, intelligence and reconnaissance, cyber, and combinations of these domains. This innovative Archarithms solution is a flexible, extensible, comprehensive Command and Control (C2) COA analysis and planning tool based on a Service Oriented Architecture (SOA) that allows users to automatically specify complex threat scenarios, integrated or stovepiped Blue/Coalition force capabilities, degradation of resources, different shot doctrines, varying Battle Management (BM) techniques, and tuned communications performance. COAs are recommended based on defense design Figures of Merit (FOMs), detailed metrics comparisons, Defended Area (DA) and Launch Area Denied (LAD) analyses, visual gap analysis, and Monte Carlo simulation analysis.

Acquire is composed of three components: Planning and Optimization, Execution, and Analysis. In Planning and Optimization, users develop Red and Blue/Coalition designs and scenarios, evaluate the design, optimize the Blue/Coalition laydowns, and refine the designs for logistics considerations. During Execution, users define or parameterize threat scenarios and execute single or multiple simulation runs for performance evaluation. The Analysis capability contributes to both the Planning and Optimization and Execution phases. A variety of metrics are automatically computed and presented to the user via the GUI.
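
As a greatly simplified illustration of the Monte Carlo simulation analysis used to compare COAs (the threat model, Pk value, and shot doctrine below are invented placeholders, not Acquire's models), effectiveness can be estimated by aggregating many randomized runs:

    # Simplified Monte Carlo sketch: estimate a COA's probability of raid negation
    # by sampling many randomized runs. Acquire's actual models are far richer.
    import random

    def simulate_run(num_threats: int, p_kill: float, interceptors_per_threat: int) -> bool:
        """Return True if every threat in the raid is negated in this run."""
        for _ in range(num_threats):
            negated = any(random.random() < p_kill for _ in range(interceptors_per_threat))
            if not negated:
                return False
        return True

    def raid_negation_probability(num_runs: int = 10_000, **kwargs) -> float:
        """Fraction of Monte Carlo runs in which the full raid was negated."""
        return sum(simulate_run(**kwargs) for _ in range(num_runs)) / num_runs

    # Example: 8-threat raid, 0.8 single-shot Pk, two interceptors per threat
    print(raid_negation_probability(num_threats=8, p_kill=0.8, interceptors_per_threat=2))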

BigIO

Polyglot Messaging System

BigIO is a fast, distributed, polyglot messaging system. The system can run in a single process or across multiple instances over a network. Message communication in a single process is extremely fast, with the ability to handle in excess of 4.6 million messages per second. Even across the network, the system can process approximately 300,000 messages per second. BigIO is an innovative alternative to DDS.
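
As a conceptual illustration of the publish/subscribe pattern such middleware provides (the class and method names below are a hypothetical stand-in, not BigIO's actual API), agents in a single process might exchange track updates like this:

    # Hypothetical usage sketch only: BigIO's real API is not shown here.
    # Illustrates topic-based publish/subscribe, with in-process delivery for
    # co-located components and networked delivery handled by the middleware.
    from dataclasses import dataclass

    @dataclass
    class TrackUpdate:
        track_id: int
        x: float
        y: float

    class InProcessBus:
        """Minimal stand-in for a message bus: topic-keyed fan-out to subscribers."""
        def __init__(self):
            self._subscribers = {}

        def subscribe(self, topic: str, handler) -> None:
            self._subscribers.setdefault(topic, []).append(handler)

        def publish(self, topic: str, message) -> None:
            for handler in self._subscribers.get(topic, []):
                handler(message)

    bus = InProcessBus()
    bus.subscribe("tracks", lambda msg: print(f"received track {msg.track_id}"))
    bus.publish("tracks", TrackUpdate(track_id=7, x=12.5, y=-3.0))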

Bird's Eye

System of Systems Level Tasking Tool

Archarithms’ Bird’s Eye is a new generation of System of Systems (SoS) level tasking tool designed to exploit sensor availability, timelines, resources, Field-of-Regard (FOR), Field-of-View (FOV), geometry, resolution, sensitivity, and phenomenology. Bird’s Eye provides an efficient, balanced utilization of available sensor system resources for the purpose of defeating the threat with no leakage attributed to the sensor tasking function. The effect of high-performance Integrated Sensor Tasking (IST) is information superiority from threat launch through impact characterization, and optimal contributions to raid Probability of Engagement Success (PES). Bird’s Eye supports Launch on Remote (LOR) and Engage on Remote (EOR). Its flexibility permits exploitation of OPIR, organic space-based assets, and airborne Electro-Optical InfraRed (EOIR) sensors.
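
A toy sketch of one small piece of that tasking problem, choosing a sensor whose field of view covers a target (the geometry, data layout, and selection rule below are invented for illustration; Bird's Eye reasons over far more factors), might look like:

    # Illustrative geometry check only: pick the nearest sensor that has the
    # target inside its field of view. Bird's Eye additionally weighs timelines,
    # resources, resolution, sensitivity, and phenomenology.
    import math

    def bearing_deg(sensor_xy, target_xy) -> float:
        dx, dy = target_xy[0] - sensor_xy[0], target_xy[1] - sensor_xy[1]
        return math.degrees(math.atan2(dy, dx))

    def in_field_of_view(sensor, target_xy) -> bool:
        offset = abs((bearing_deg(sensor["position"], target_xy) - sensor["boresight_deg"] + 180) % 360 - 180)
        return offset <= sensor["fov_deg"] / 2

    def best_sensor(sensors, target_xy):
        candidates = [s for s in sensors if in_field_of_view(s, target_xy)]
        return min(candidates,
                   key=lambda s: math.dist(s["position"], target_xy),
                   default=None)

    sensors = [
        {"name": "radar_a", "position": (0.0, 0.0), "boresight_deg": 45.0, "fov_deg": 90.0},
        {"name": "eoir_b", "position": (10.0, 0.0), "boresight_deg": 135.0, "fov_deg": 60.0},
    ]
    print(best_sensor(sensors, (5.0, 5.0)))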

Chrysalis

Autonomous Situational Awareness System

Chrysalis is a fully autonomous UAS-based situational awareness system. The UAS, in its idle state, resides in an environmental enclosure mounted to the owner’s property. When activated, either through predefined flight times or external triggering systems, the environmental enclosure opens and the UAS deploys on its predefined flight plan. Throughout its flight, the UAS utilizes a 3-axis gimbal and HD camera to observe the owner’s property. The system runs a comparative analysis of video captured during the predefined flight routine against a control video; any change to the property scene triggers a text message to the owner containing an image of the change. Upon completion of the predefined flight path, the UAS safely returns to the environmental enclosure for data storage and battery charging.
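
The comparative-analysis step can be illustrated with a simple frame-differencing sketch; OpenCV and the thresholds below are assumptions for the example rather than Chrysalis's actual comparison pipeline:

    # Simplified change detection: compare a patrol frame against the matching
    # control frame and flag large differences. Chrysalis's real pipeline also
    # handles frame alignment, lighting variation, and alerting.
    import cv2

    def scene_changed(control_frame, patrol_frame, threshold: float = 0.02) -> bool:
        """Return True if more than `threshold` of pixels differ significantly."""
        control = cv2.cvtColor(control_frame, cv2.COLOR_BGR2GRAY)
        patrol = cv2.cvtColor(patrol_frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(control, patrol)
        _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
        changed_fraction = cv2.countNonZero(mask) / mask.size
        return changed_fraction > threshold

    control = cv2.imread("control_frame.png")
    patrol = cv2.imread("patrol_frame.png")
    if scene_changed(control, patrol):
        cv2.imwrite("change_alert.png", patrol)  # image to attach to the owner's alert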

Impression

Integrated Command and Control Planning and Training Framework

Impression is an integrated Command and Control (C2) planning and training framework focused on a modular, decoupled, agent-oriented design. It utilizes a distributed communication system permitting flexible information formats, information separation, and dynamic transport. Impression's software architecture provides sufficient abstraction to permit flexible reuse of agents between various applications, along with a communication platform capable of accommodating such flexible agent and information design.

Impression’s architecture implements a Modular Defense Planner (MDP) as a group of agents implementing representative C2 planner and trainer capabilities. The trainer leverages as many of the C2 planner capabilities as possible, reducing the number of required agents. The agents communicate with each other using two separate but complementary software tools: BigIO and Redis. BigIO, an Archarithms-developed big data messaging middleware, is used to share transient information such as tracks or resource locations between agents. Redis is used to share persistent, generally large information (e.g., a complete training scenario) between the agents. Impression's architecture and design are easily expanded to include capabilities provided by any set of adept agents. Further, the communication tools utilized are information-format agnostic and can be readily adapted to multiple domains.
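
For example, the persistent, Redis-backed side of that exchange might resemble the sketch below, using the standard redis-py client; the key names and scenario format are hypothetical:

    # Hypothetical sketch: persist a large training scenario in Redis so any agent
    # can retrieve it later; transient data (tracks, resource locations) would
    # instead flow over the BigIO messaging middleware.
    import json
    import redis

    store = redis.Redis(host="localhost", port=6379, db=0)

    scenario = {
        "name": "training_scenario_01",
        "threats": [{"id": 1, "type": "ballistic", "launch_time": 120.0}],
        "blue_assets": [{"id": "radar_a", "position": [34.7, -86.6]}],
    }

    # Planner agent publishes the scenario once.
    store.set("scenario:training_scenario_01", json.dumps(scenario))

    # Trainer agent retrieves it when needed.
    loaded = json.loads(store.get("scenario:training_scenario_01"))
    print(loaded["name"])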

Edge

Edge Computing Devices

Deep Learning Computer

Edge, our Deep Learning computer, is designed specifically for Deep Learning applications. It uses CUDA-capable General Purpose Graphics Processing Units (GPGPUs) to exploit the parallel nature of Deep Learning architectures, providing significantly greater performance. Edge also provides data redundancy and SSD caching to mitigate the common bottleneck of data storage.

Edge's specifications are as follows:

GPUs

  • Over 12,000 total CUDA cores
  • 336.5 GB/sec memory bandwidth
  • 48 GB memory
  • Over 7 TFLOPS total
  • Liquid-cooled CPU
  • 64 GB DDR4
  • Dedicated 250 GB SSD for the operating system, compilers, and drivers
  • Data redundancy
  • Workstation-class motherboard with 4-way PCI-E Gen3 x16 support
  • OS: Ubuntu 14.04
  • GPU computing tools:
    • GPU parallel computing platform and application programming interface
    • C/C++ libraries for accelerated matrix algebra, convolution, etc.
    • NVCC compiler
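
To confirm that a Deep Learning framework actually sees the GPGPUs, a quick device query can be run; the sketch below uses PyTorch's CUDA utilities purely as an example toolchain on top of the CUDA platform listed above:

    # Quick device query: list the CUDA-capable GPUs a Deep Learning framework sees.
    # PyTorch is used here only as an example framework on top of the CUDA stack.
    import torch

    if not torch.cuda.is_available():
        print("No CUDA-capable GPU detected")
    else:
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"GPU {i}: {props.name}, "
                  f"{props.total_memory / 1024**3:.1f} GB, "
                  f"{props.multi_processor_count} SMs")
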
Insight

Course of Action Recommender

Insight is a Course of Action (COA) recommender that incorporates large quantities of human-generated data, identifies relevant features of the data, reasons on the data to produce actionable COA recommendations, and dynamically learns as new information becomes available.

Insight’s design addresses the four fundamental challenges of COA recommendation using a combination of:

  1. Analogical data representation to easily map human entered data into machine understandable taxonomies
  2. A distributed and parallel NoSQL capability to accurately manage and query petabyte sized data sets
  3. Feature reduction to select applicable data sets for further reasoning
  4. Layered Boltzmann machines to discover latent feature relationships and refine the data set into a set of actionable COA items, combined with classification techniques that provide an understandable COA complete with data pedigree. In addition to providing latent feature extraction, Deep Belief Network (DBN) representations support supervised learning techniques that allow them to incorporate new data as it becomes available.

In 2014, Insight was trained using over 1 million example Texas Hold’em poker hands. Insight's fully trained DBN was able to mimic the decisions of a modeled player who won the 2014 Annual Computer Poker Competition with an accuracy of 99.9976% over 250,000 test hands.
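
Item 4 above can be illustrated with a greedy, layer-wise stack of Restricted Boltzmann Machines followed by a supervised classifier; the sketch below uses scikit-learn's BernoulliRBM with made-up toy data, since Insight's actual network, feature encoding, and training corpus are not published here:

    # Rough sketch of the DBN idea: layer-wise feature learning with stacked RBMs
    # followed by a supervised classifier. The binary "hand" encodings and action
    # labels below are random toy data, not Insight's poker training set.
    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(1000, 52)).astype(float)  # toy binary hand encodings
    y = rng.integers(0, 3, size=1000)                       # toy actions: fold/call/raise

    dbn = Pipeline([
        ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)),
        ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),          # supervised stage stand-in
    ])
    dbn.fit(X, y)
    print("training accuracy:", dbn.score(X, y))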

Puma

Missile Performance Enhancement

PUMA mitigates missile performance degradation in challenging raid environments. PUMA combines proven penalty-based methods of characterizing potential engagement solutions with an innovative, finite horizon optimization assignment algorithm. The penalty-based approach quantifies contributors to Probability of Kill (PKill) and Probability of Engagement Success (PES) for concerns such as potential intercept flash, additional interceptors within the Field of View (FOV), closing speed, and others. Penalties are tailored to relevant aspects of the weapon system, and can be updated to reflect technology improvements without re-building software. The penalty-based approach is suited to handle one-on-one and many-on-many engagements, Fire Control (FC) from a single platform, and Battle Management (BM) from multiple platforms. Identical techniques can be applied to multiple weapon and threat types. For example, a platform with multiple missile types capable of engaging ballistic missiles and air breathers can use the same fire control with penalties tailored to the specific weapon and threat types.

Once viable engagement solutions for all weapon-threat pairs are scored, an enhancement assignment algorithm is used to select the optimal engagement(s) over the next time horizon. The algorithm accounts for finite resource availability (sensor, launcher, communications), assessed threat lethality, and each potential engagement’s score (penalty). PUMA maximizes the threat lethality negated and permits flexible firing doctrine to include N-on-1 engagements. In-progress and previously planned engagements are accounted for during each time horizon. Launch-On-Remote (LOR) and Engage-On-Remote (EOR) constructs are supported.
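
The score-then-assign pattern can be sketched as follows; the penalty terms, weights, and numbers are invented placeholders rather than PUMA's actual penalty model, and a standard minimum-cost assignment solver stands in for PUMA's finite horizon optimization:

    # Simplified sketch of the penalty-then-assign pattern: score each weapon-threat
    # pair with additive penalties, then choose the pairing that minimizes total
    # penalty. PUMA's real penalties, doctrine options, and horizon logic are richer.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def engagement_penalty(closing_speed, others_in_fov, flash_risk) -> float:
        """Toy additive penalty; each term would be tailored to the weapon system."""
        return 0.5 * others_in_fov + 2.0 * flash_risk + 0.01 * max(0.0, 3000.0 - closing_speed)

    weapons = ["intA", "intB", "intC"]
    threats = ["T1", "T2", "T3"]
    penalties = np.array([
        [engagement_penalty(3400, 0, 0.1), engagement_penalty(2900, 1, 0.0), engagement_penalty(3100, 0, 0.4)],
        [engagement_penalty(2500, 2, 0.0), engagement_penalty(3300, 0, 0.1), engagement_penalty(2800, 1, 0.2)],
        [engagement_penalty(3000, 1, 0.3), engagement_penalty(3200, 0, 0.0), engagement_penalty(3500, 0, 0.1)],
    ])

    rows, cols = linear_sum_assignment(penalties)   # minimum-total-penalty pairing
    for w, t in zip(rows, cols):
        print(f"{weapons[w]} -> {threats[t]} (penalty {penalties[w, t]:.2f})")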

Using a high-fidelity large-raid simulation, PUMA significantly reduced the chance that an interceptor would encounter stray radiation due to flash, diverting kill vehicles, or celestial sources. The SSEKP was not degraded by the advanced scheduling penalties, even though numerous candidate engagements were assigned lower penalty (score) values based on predictions that the interceptor would be susceptible to interference from stray sources of infrared radiation. PUMA was able to identify and select candidate engagements that mitigated exposure to harmful stray radiation while maintaining battlespace coverage, achieving an overall 72% reduction in the number of scheduled engagements in which an interceptor would be exposed to stray infrared radiation due to flash, other interceptor diverts, or celestial objects.

Stadium

Sensor Resource Manager

Stadium is a comprehensive, coordinated Sensor Resource Manager that includes emplacement planning, search planning, and automated distributed sensor tasking, all of which adapt in real time to counter the threat. Stadium encompasses three capabilities: a planner (planning phase), a real-time distributed tasking capability (execution phase), and a mechanism for adapting to scenario dynamics (adjustment phase).

Planning Phase

If a sensor is out of position, its ability to assist with tracking and search during the course of a scenario is diminished. As a result, inferior planning can be seen as effectively reducing the total available resources of the force structure right from the outset. Therefore, in the planning phase, Stadium provides optimization of the placement and initial search assignments of a set of distributed sensors.

The generated plan places the sensors in the best possible position for successfully handling likely raid scenarios based on the combined resources of the force structure and the available a priori intelligence about the adversary’s most likely courses of action.
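
As a rough, greatly simplified illustration of the planning-phase idea (the coverage model, ranges, and coordinates below are invented for the example; Stadium's planner optimizes far more, including initial search assignments), sensor sites can be chosen greedily to cover predicted threat trajectories:

    # Toy placement sketch: greedily choose sensor sites that cover the most
    # predicted threat trajectory points not already covered by earlier picks.
    import math

    MAX_RANGE = 60.0  # arbitrary coverage radius for the example

    def coverage(site, points):
        return sum(1 for p in points if math.dist(site, p) <= MAX_RANGE)

    def plan_placements(candidate_sites, predicted_points, num_sensors=2):
        remaining = list(predicted_points)
        placements = []
        for _ in range(num_sensors):
            best = max(candidate_sites, key=lambda s: coverage(s, remaining))
            placements.append(best)
            remaining = [p for p in remaining if math.dist(best, p) > MAX_RANGE]
        return placements

    candidate_sites = [(0.0, 0.0), (50.0, 10.0), (100.0, 0.0), (50.0, 80.0)]
    predicted_points = [(20.0, 30.0), (60.0, 20.0), (90.0, 10.0), (55.0, 70.0)]
    print(plan_placements(candidate_sites, predicted_points))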

Execution Phase

In the execution phase, Stadium provides real-time maximization of the overall resources within the provided plan. That is, given the prioritization of the threats and search space and the layout of the available sensors, Stadium seeks to reduce the overall resource usage by optimally distributing the combined task load across the entire force, in real time, as the scenario unfolds. Stadium provides a highly scalable and efficient solution for optimal sensor tasking. In addition, because the approach is decentralized, the loss of any single sensor or platform will not disrupt the tasking capabilities of the remaining force, which provides significant robustness and allows for graceful degradation.

Adjustment Phase

While the planning phase places the force structure in the best possible position with respect to the available a priori information, any planning approach that assumes the adversary will behave exactly as expected is naïve and will rarely provide truly optimal performance. Therefore, during the adjustment phase (concurrent with execution), Stadium updates the plan to reflect the adversary’s observed behavior and improves overall performance in real time. In this way, the accuracy of the scenario picture continually improves based on the most recent sensor data. Together, the three phases of Stadium place the warfighter in the most advantageous position, affording the most efficient response and the most accurate picture.

Vapor

System-On Module Capable Software Suite

Vapor is an advanced, open-architecture, System-On-Module (SOM)-capable software suite that accepts diverse sensor inputs, produces fire control quality target states, and interfaces seamlessly with disparate weapon fire control systems. Vapor enhances the capability of close-range fire control and weapon systems by providing an open-architecture solution that accepts diverse sensor measurements, translates measurements to a common protocol, performs effective correlation and fusion, automatically prioritizes threats (including man-in-the-loop input), and provides standardized outputs to legacy and future fire control computers. Vapor provides automated evaluation of solution efficacy given weapon dispersion qualities. Vapor is applicable to platforms including hand-held, indirect-fire, and vehicle-mounted weaponry.

Vapor addresses the fundamental challenges of open-architecture systems by exploiting:

  1. A low-latency, real-time architecture that facilitates auto-discovery and component registration of all contributing sensors
  2. A library of interface translation routines to connect incoming sensor measurements with the SOM internal functions
  3. An advanced track management function that performs smoothing, correlation, triangulation, and filtering for precision target state estimation (a simple filtering sketch follows this list)
  4. An advanced threat prioritization scheme that accepts user inputs and uses decision-level logic for ranking threats
  5. A MOSA-based framework that provides simulation and field testing capabilities in order to evaluate the effectiveness of target localization for multiple fire control systems.
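
A minimal sketch of the filtering portion of item 3, a constant-velocity Kalman filter over noisy position measurements (the matrices and measurements are invented for the example; Vapor's track management performs full multi-sensor smoothing, correlation, and triangulation), is shown below:

    # Minimal constant-velocity Kalman filter sketch for target state estimation
    # (position and velocity in one dimension). Vapor's track management goes far
    # beyond this single-sensor, single-dimension example.
    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = np.diag([1e-3, 1e-2])                # process noise
    R = np.array([[0.25]])                   # measurement noise

    x = np.array([[0.0], [0.0]])             # initial state estimate
    P = np.eye(2)                             # initial covariance

    def kalman_step(x, P, z):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with measurement z
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    for z in [0.9, 2.1, 2.9, 4.2, 5.1]:       # noisy position measurements
        x, P = kalman_step(x, P, np.array([[z]]))
    print("estimated position, velocity:", x.ravel())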

Contact Us

For details call 256-763-8781 or email randy.riley@archarithms.com.