
Nov 20, 2021

The development and security of American military artificial intelligence

In early 2020, the U.S. Navy released "Artificial Intelligence Technology Security," a report focused on the security implications of the technology. The U.S. Navy, and the Defense Department as a whole, is taking the development of military AI seriously. In February, June, and September 2019, the United States released three major strategies, namely the Department of Defense Artificial Intelligence Strategy, the National Artificial Intelligence Strategy, and the Air Force Artificial Intelligence Strategy, marking the full launch of its "intelligence strategy" at the national, defense, and service levels. The development of artificial intelligence in the U.S. military field is clearly accelerating.



First, issuing government directives and strategic plans focused on the development of artificial intelligence


As an important engine driving the fourth industrial revolution, artificial intelligence has a profound impact on the development of economic industries and technical disciplines. The United States has therefore elevated artificial intelligence to a national strategic priority, promoting its momentum across all areas of social development (especially national defense) in order to advance AI research and development. In October 2019, the World Economic Forum issued a white paper on a framework for national AI strategies, which set out a minimum viable framework for such strategies. It pointed out that a national AI strategy should consider strategic priorities, population needs, resource constraints, geopolitics, and other factors, and it is designed to guide governments that have not yet developed, or are still developing, national AI strategies.


The United States continues to regard the development of AI technology as a major strategy for enhancing national strength and safeguarding national security, and it is strengthening the layout of AI technology at the national strategic level. In February 2019, President Trump signed the executive order "Maintaining American Leadership in Artificial Intelligence," announced through the White House Office of Science and Technology Policy, which set out development policies, principles, strategic objectives, and key areas for artificial intelligence. The order launched the American AI Initiative, aimed at promoting U.S. leadership in the field, and directed the federal government to pool its resources and focus them on artificial intelligence.


In February of the same year, the U.S. Department of Defense released a summary of its 2018 Department of Defense AI Strategy, subtitled "Harnessing AI to Advance Our Security and Prosperity." This was the first artificial intelligence strategy of the U.S. Department of Defense. It aims to implement the AI priorities outlined in the U.S. government's National Security Strategy and National Defense Strategy, and to provide strategic guidance for the Department of Defense as it seeks military AI advantages and develops military AI capabilities. In July 2019, the Air Force launched the Digital Air Force program, which aims to address deficiencies in its data management, information technology architecture, and business operations to keep the Air Force competitive. In September 2019, the U.S. Department of Energy established the Artificial Intelligence and Technology Office to provide federal data, models, and high-performance computing resources to U.S. artificial intelligence researchers. In the same month, the Air Force released the 2019 Air Force Artificial Intelligence Strategy as an annex to the Department of Defense AI Strategy, detailing the basic principles, functions, and objectives necessary for effective governance and leadership in the digital age. At the beginning of 2020, the Center for Naval Analyses released a special report entitled "Artificial Intelligence Technology Security -- Recommended Action Plans for the Navy." Starting from the public concern caused by the U.S. Navy's promotion of artificial intelligence in the military field, the report sets out the overall attitude and thinking of the Navy, and of the entire defense establishment, toward adopting this emerging technology.


Second, military research institutions carry out research and development projects to explore new military scenarios for artificial intelligence technology


As a major military power, the United States has a very clear goal for empowering military operations with artificial intelligence: to push top U.S. AI research toward new technological breakthroughs, promote new scientific discoveries, enhance economic competitiveness, and consolidate national security. In March 2019, the Senate Armed Services Committee held a hearing on the Department of Defense's artificial intelligence plans, at which the directors of the Defense Advanced Research Projects Agency (DARPA), the Defense Innovation Unit (DIU), and the Joint Artificial Intelligence Center (JAIC) each described their organizations' AI programs and operating mechanisms. The hearing consolidated and strengthened the connection between AI technology and its military applications, and ensured that the pace of AI militarization in the United States accelerated further. DARPA, for example, is shifting its investment and R&D focus to third-generation artificial intelligence technology to create machines that can reason in context. Major funded projects include Lifelong Learning Machines (L2M, launched in 2017), Explainable Artificial Intelligence (XAI, launched in 2018), and Machine Common Sense (MCS, launched in 2018), which explore ways to improve AI technology and achieve contextual reasoning capabilities. DARPA believes that integrating these technologies into military systems that work alongside warfighters will help commanders make timely decisions in complex, time-sensitive battlefield environments, understand vast amounts of incomplete or contradictory information, and use unmanned systems to perform critical missions safely and autonomously.


In January 2019, DARPA launched the Knowledge-directed Artificial Intelligence Reasoning Over Schemas (KAIROS) program, which aims to improve the ability to mine and understand complex events and their relationships in vast amounts of information for complex battlefield environments. In the same month, the U.S. Army Research Laboratory (ARL) launched the Distributed Processing in Heterogeneous Tactical Environments (DPHTE) program, a fog-computing-based platform that provides warfighters with increased situational awareness in contested military environments. In February 2019, the Air Force Research Laboratory released its multidomain warfare and targeting-support information analysis program, which aims to develop algorithmic-warfare and AI-based technologies for rapidly predicting and striking time-sensitive, high-value hostile moving targets. In May 2019, DARPA launched its Air Combat Evolution (ACE) program to apply artificial intelligence in place of pilots in some air-combat missions. Also in May 2019, MIT announced an Artificial Intelligence Accelerator program for the U.S. Air Force, focusing on disaster relief and medical preparedness, data management, maintenance and logistics, vehicle safety, and network resilience. In September 2019, the Joint Artificial Intelligence Center of the U.S. Department of Defense announced a new framework for U.S. military cybersecurity data, focusing on laying the foundation for future AI cyber-defense systems.


At the beginning of 2020, the Trump administration submitted a budget request for fiscal year 2021 to Congress to accelerate the development of artificial intelligence and other technologies. The proposed budget would cut government R&D spending by $13.8 billion, from $156 billion in fiscal 2020 to $142.2 billion, but the request still emphasizes prioritizing "industries of the future" and the need to accelerate technologies such as artificial intelligence. Of that, $5 million would go to the Department of Energy's new Artificial Intelligence and Technology Office to strengthen AI programs.


Third, consolidating the ethical norms and security boundaries for the practical application of artificial intelligence


With the development of artificial intelligence technology, dilemmas involving human-rights ethics, privacy protection, discrimination, and security have become increasingly prominent. The United States is exploring a combination of measures to ensure that AI develops under adequate supervision and control. In particular, the national AI strategy released in 2019 and the AI technology security report released in early 2020 highlight issues such as ethics, privacy, and security, arguing that social benefits should be maximized on the premise of respecting ethics and paying attention to security.


(1) Clarifying the ethical principles and standards for the use of AI technology in warfare


The United States promotes research that lays out a vision and guiding principles for the legitimate and ethical use of AI and that guides responsible AI application and development. In January 2019, the U.S. Department of Defense asked the Defense Innovation Board to develop ethical principles for the use of AI in warfare, to guide the military's use of AI technology and weapons and to confirm with Silicon Valley technology companies how their AI products would be used. The Pentagon initiative is thought to be aimed at shaping global norms for military AI and at attracting Silicon Valley companies to its defense efforts. In October 2019, the board released "AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense," considered the first U.S. response to the ethical issues raised by military AI. In January 2019, the Brookings Institution, a renowned U.S. think tank, published a report entitled "Automation and Artificial Intelligence: How Machines Are Affecting People and Places." It analyzed the impact of automation and artificial intelligence on industries, employment, geography, and population over the past 30 years, and projected trends through 2030. Finally, it proposed a comprehensive response framework for national, state, and local policymakers, to help people understand and regulate the role of automation and artificial intelligence.


(2) Artificial intelligence is an emerging technology in the military field, and its security cannot be ignored


Human history is littered with examples of armies using technology to achieve military superiority. Consider the chariot: the first vehicle to appear on the battlefield, it improved on the speed and mobility of the ordinary civilian carriage and delivered significant advantages in military use. The chariot has been described as the "superweapon" of its time. Or gunpowder: born of an accidental discovery, it revolutionized the shape and style of warfare by enabling armies to harness the energy of chemical reactions for greater speed and power. Or the internal combustion engine: inheriting and developing the advantages of the steam engine, it changed the speed and scope of war activities, powering logistics (supply trucks) and providing sustained long-range surveillance and strike capabilities for submarines, aircraft, and missiles.


The adoption of most technologies has at one time or another changed the pattern of warfare. A few of them, including gunpowder and nuclear weapons, revolutionized its pattern and scope. Artificial intelligence technology is also considered to be in this category: it can be applied to all aspects of war, greatly improving the effectiveness and efficiency of war activities. AI technologies also differ from one another because of their unique characteristics. First, it should be noted that real-world applications of AI are narrow AI technologies that solve problems in specific domains, rather than general AI with universal applicability. The military's use of AI can be compared to the way the U.S. military handles nuclear weapons: knowledge of the key technical areas bearing on security must be held largely by the military and its civilian experts, and it remains largely a technical domain.


(3) Give "just enough" trust in the safety of ai technology


The security of artificial intelligence technology is also related to the degree of trust placed in it. A key question in the military's use of AI is whether military personnel and senior U.S. government leaders can be confident that these systems work and will not cause unexpected problems. "Individuals who decide to deploy a system for a particular mission must trust the system," noted a 2016 Defense Science Board study on autonomous systems. Operations in Iraq and Afghanistan showed that commanders and operators charged with carrying out specific operations will not necessarily use systems whose consequences they do not fully understand. When systems built to meet urgent needs were deployed to combat (such as counter-improvised-explosive-device systems or systems providing critical intelligence and surveillance), some units still chose the weapon systems and intelligence, surveillance, and reconnaissance platforms they were already familiar with, even when the older systems' performance fell short of the newly available ones.


One danger is that too little trust in AI systems will prevent troops from using functions they need. The other danger is undue faith in a capability. Humans tend to place too much trust in machines, even when the evidence shows that such trust is unwarranted, and there are specific cases of excessive trust in war activities. In 2003, for example, the Army Patriot air defense missile system mistakenly identified a Navy F/A-18 aircraft as a tactical ballistic missile and advised the operator to fire a missile to intercept it. The operator approved the recommendation without independently verifying the available information, and the aircraft was shot down. This suggests that in actual combat operations the military needs to place "just the right level" of trust in AI, neither too much nor too little, to avoid sliding toward either extreme. Achieving the right level of trust requires keeping people in the decision-making process, a process that relies on a range of relevant competencies and on experience with and knowledge of system functionality.


(4) Military AI security issues will be included in the policy guidelines


Senior military and government leaders also influence the nature of military operations through policy decisions, including determining which specific technologies may be used in warfare. These policies can affect the degree of oversight: DoD Directive 3000.09, for example, requires a high-level review of certain types of autonomous weapon systems. Policies also define which technologies are permissible in warfare (e.g., restrictions on the use of white phosphorus, and requirements governing the use of cluster munitions and the performance parameters of other such weapons) and constrain the principles and strategies used in certain types of operations. For example, the 2013 Presidential Policy Guidance (PPG) and the 2017 Presidential Policy Guidance set out a general framework of principles for the approval and oversight of certain counterterrorism operations. These policy guidelines help ensure that military activities are consistent with the principles, values, and interests of the United States, and such policy-level decisions are meant to reflect the level of trust that should be placed in the reliability of the systems or operations concerned.


It is worth noting that all of these examples concern security principles: DoD Directive 3000.09 is designed to avoid "unintended engagements" (such as civilian casualties), and the restrictions on white phosphorus and cluster munitions are likewise aimed at reducing the risk to civilians when these weapons are used. In the 2013 Presidential Policy Guidance, the "no-go" criterion applies directly to the approval process for combat operations. It can therefore be predicted that security issues will be part of future senior leaders' specific guidance and instructions on applying artificial intelligence to war activities.


(5) The military should cooperate with industry to resolve security issues


Great advances in artificial intelligence technology have also created a new dependence on industry for the U.S. government. Since World War II, the U.S. government has relied heavily on its own research and development dollars, but R&D investment in AI technology is increasingly led by the private sector, marked by a sharp increase in technology-sector R&D spending over the past decade. Figure 1 compares total U.S. government spending on networking and information technology R&D with the R&D investments of the top five U.S. tech companies (Amazon, Google/Alphabet, Intel, Microsoft, and Apple). As the figure shows, companies in the tech sector are spending significantly more on R&D, and the gap is widening: in 2010, these companies spent six times as much on R&D as the U.S. government did in this area, and eight years later corporate spending had ballooned to 15 times the government's. Overall, the U.S. government faces a rapidly growing gap in its investment in cutting-edge technology research. This creates an environment of constant change for the government, in which collaboration with industry is critical to maintaining the technological edge the U.S. government needs to achieve its strategic goals. In this sense, AI technology security should also be a concern of industry. As shown when companies such as Google withdrew support for U.S. government military applications and began using ethics review processes to monitor their internal workflows, the government must work with industry and rely on its help to solve such problems.




Figure 1. Comparison of the R&D investment gap between the U.S. government and technology-industry enterprises


Fourth, the future development trend of American military artificial intelligence


Artificial intelligence technology is generally divided into three levels: weak artificial intelligence, strong artificial intelligence, and super artificial intelligence. It is estimated that strong artificial intelligence may emerge before 2050. The intelligent construction of the US military may proceed through three stages:


Before 2025, the US military will focus on building an intelligent military framework, with its overall level remaining at the weak-AI stage. Construction will center on a "global surveillance and combat system," prioritizing upgrades to undersea, cyber-electromagnetic, air, space, prompt-strike, and missile-defense combat systems; emphasizing the development of unmanned, stealth, and long-range precision-strike capabilities; improving the capacity to intervene in the "global commons"; and ensuring a credible deterrent of "denial" and "punishment." At this stage, the number of unmanned systems will gradually exceed that of manned systems, and autonomous unmanned systems will become an important force in US front-line operations. Stealthy, unmanned, and agile forces will become the main means of US military intervention.


By 2035, the US will have initially completed its intelligent combat system, and its overall level will enter the strong-AI stage. The US military is developing intelligent combat platforms, information systems, and decision-support systems, as well as new weapons such as directed-energy, hypersonic, bionic, genetic, and nano weapons, to open a new military "gap" over major adversaries. At this stage, investment in unmanned systems will exceed that in manned systems, and unmanned systems will dominate in both construction scale and operational application.


By 2050, the US military's intelligent combat system will be more advanced and complete, and its overall level will reach the super-AI stage. The US military is likely to make breakthroughs in technologies such as strong artificial intelligence, universal quantum computing, controlled nuclear fusion, nanorobots, regeneration, biogenesis, and brain networking. Combat platforms, information systems, and command and control may become fully intelligent and unmanned, and a greater variety of new weapons such as bionic, genetic, and nano weapons will be deployed on the battlefield. The battle space will expand further into the biological, nano, and intelligent domains, and mankind will enter the "era of robot warfare."

