ITGS Syllabus

Saturday, December 02, 2006

Topic 176

Value of the development of AI as a field, for example, whether it is an appropriate place to put economic resources by Isaku

Artificial intelligence has long been familiar to the public thanks to science-fiction novels by authors such as Jules Verne and Isaac Asimov. However, the sophisticated AI depicted in these novels still remains a scientific dream, much like flying cars. Still, scientists all over the world are making great efforts to turn this dream into reality, since it is expected to benefit mankind in many fields.

Now the question is: in which field is it appropriate to develop this technology? At present, the field said to have come closest to realizing it is the military.

First, why would the military research artificial intelligence? Perhaps that is not the right question to ask; the question should be why the military wouldn't research artificial intelligence. Indeed, producing a proper artificial intelligence would mean a great deal to the military, which has two types of AI in mind. The first is, in short, an all-powerful commanding office in itself.

With this AI, the military will have less need for large command stations staffed with operators, generals, and so on. Instead, the AI will be connected to the military's network and will be able to efficiently receive information, analyze it, and then issue effective orders entirely on its own. If this system comes into use, the army will be able to operate with fewer human staff, ultimately decreasing the military's expenses.

In theory, the system will also be more efficient than human staff, since the different kinds of jobs are merged into one system, resulting in a faster, more connected operation. Moreover, every decision the AI makes will be based on theoretical data, so it will make fewer mistakes than humans. The second type of AI is an independent battlefield-operations type: in short, a mechanical soldier, or robot. The merit here is, of course, the millions of soldiers' lives that such technology would save.

These merits, however, are merits only when seen from the military's point of view.


or

Value of the development of AI as a field, for example, whether it is an appropriate place to put economic resources by Chirag


Computer scientists like to view a program as an abstract specification of a machine, describable behaviorally in terms of the input/output relationship resulting from its computation. The machine's product is its output, representing the value of a function at the point represented by its input. Often we find it helpful to view this product at a higher level, say, as the solution to some well-posed problem. Inevitably, this problem bears on what we are to do, that is, some course of action to be embarked upon. (Conceptions of computation as answering questions are a relic of the era when human intermediaries were necessary to perform the transduction from computation to action.) In this view, the computer is a decision machine, where a decision is the resolution of a distinction among potential courses of action.

It is widely recognized that many of the problem-solving techniques developed in AI research (e.g., so-called classical planning) need to be generalized to accommodate uncertainty and graded preferences. Work in decision-theoretic planning (Hanks et al., 1994) is beginning to address these problems, adopting a more comprehensive framework for principled resource allocation while attempting to retain useful computational and representational techniques from prior AI work.

Most of microeconomic theory assumes that individual agents are rational: acting so as to achieve their most preferred outcome, subject to their knowledge and capabilities. Indeed, this rationality abstraction is perhaps the single methodological feature that most distinguishes economics from the other social sciences.

This approach is highly congruent with much work in Artificial Intelligence. About fifteen years ago, Newell (1982) proposed that a central characteristic of AI practice is a particular abstraction level at which we interpret the behavior of computing machines. Viewing a system at Newell's knowledge level entails attributing to the system knowledge, goals, and available actions, and predicting its behavior based on a principle of rationality that specifies how these elements dictate action selection. Rationality as applied here is a matter of coherence, defining a relation in which the knowledge, goals, and actions must stand. This is exactly the Bayesian view of rationality (standard in economics), in which knowledge and goals (or beliefs and preferences) are subjective notions, constrained only by self-coherence (consistency) and coherence with resulting behavior.
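The principle of rationality described above can be made concrete with a small sketch. The states, probabilities, and utilities below are hypothetical, invented purely for illustration; the point is only to show how beliefs and preferences jointly dictate action selection under expected-utility maximization:

```python
# Bayesian-rational action selection: an agent's beliefs (a probability
# distribution over world states) and preferences (a utility function over
# outcomes) together determine which action it should take.

# Hypothetical beliefs: the probability the agent assigns to each state.
beliefs = {"rain": 0.3, "sun": 0.7}

# Hypothetical preferences: the utility of each (action, state) outcome.
utility = {
    ("take_umbrella", "rain"): 5, ("take_umbrella", "sun"): 2,
    ("leave_umbrella", "rain"): -10, ("leave_umbrella", "sun"): 4,
}

def expected_utility(action):
    """Average utility of an action, weighted by the agent's beliefs."""
    return sum(p * utility[(action, state)] for state, p in beliefs.items())

def rational_choice(actions):
    """The rationality principle: pick the action maximizing expected utility."""
    return max(actions, key=expected_utility)

actions = ["take_umbrella", "leave_umbrella"]
for a in actions:
    print(a, expected_utility(a))
print("choice:", rational_choice(actions))
```

Note that nothing in the sketch says where the beliefs or utilities come from; the rationality constraint is only one of coherence between them and the resulting behavior, exactly as in the Bayesian view.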

In human societies, computational power is inherently distributed across many relatively small brains resident in separate skulls, connected by costly, low-bandwidth, error-prone communication channels. Moreover, authority over activity is separately controlled by the local computational units. It is therefore not surprising that economics focuses on the decentralized nature of decision making. A primary aim of the discipline is to explain the aggregate results of alternate configurations of interacting rational agents.

The case for decentralization in computational environments, where communication is usually more direct and configurations more controllable, is less straightforward. Nevertheless, a variety of technological and other factors are leading to computational environments that are increasingly distributed. At this writing, the development and promotion of "software agents" (not necessarily derived from AI technology) is a prominent activity. Although interpretations of software agency vary widely, typical conceptions involve autonomy of action, modularity of scope and interest, and interaction with other agents. Understanding and influencing configurations of software agents is directly analogous to the problem faced by economists.

It is not possible in this short position paper to survey the large body of work on probabilistic reasoning, decision-theoretic planning, game-theoretic analysis of multiagent systems, etc., that has made its way into AI over the last ten years. Suffice it to say that the field has been far more open in the past decade to ideas that could broadly be characterized as economic. That these ideas have had significant impact in particular subfields is reflected in the ubiquity of concepts of resource allocation and rationality in the recent AI textbook of Russell and Norvig (1995).

This is not a surprise. As I have attempted to point out, the goals of AI and those of economics overlap substantially, and are analogous in many of the non-overlapping regions. AI is the branch of computer science that is concerned with the substance of behavior, and with deriving general principles for designing deciding agents. In so doing, AI unapologetically invokes rationality concepts, and aims to render the rationality abstraction an operationally viable approximation. When activity is decentralized, AI considers interactions in social terms.

The point of all this is not, of course, to suggest that economics has all the answers to AI problems. But recognizing that AI's problems are in large part economic does help us to formulate the questions, and opens to us a variety of concepts and techniques that offer a starting point on potential solutions. Success in AI would mean an account of the economics of computation, and one way toward this goal starts with some computation of economics.

1 Comment:

Blogger Romeo Wu said...

348 words in total

April 02, 2007 3:08 PM  
