   1 \documentclass{article}
   2 \usepackage{amsmath}%
   3 \setcounter{MaxMatrixCols}{30}%
   4 \usepackage{amsfonts}%
   5 \usepackage{amssymb}%
   6 \usepackage{graphicx}
   7 \usepackage{geometry}
   8 \newtheorem{theorem}{Theorem}
   9 \newtheorem{acknowledgement}[theorem]{Acknowledgement}
  10 \newtheorem{algorithm}[theorem]{Algorithm}
  11 \newtheorem{axiom}[theorem]{Axiom}
  12 \newtheorem{case}[theorem]{Case}
  13 \newtheorem{claim}[theorem]{Claim}
  14 \newtheorem{conclusion}[theorem]{Conclusion}
  15 \newtheorem{condition}[theorem]{Condition}
  16 \newtheorem{conjecture}[theorem]{Conjecture}
  17 \newtheorem{corollary}[theorem]{Corollary}
  18 \newtheorem{criterion}[theorem]{Criterion}
  19 \newtheorem{definition}[theorem]{Definition}
  20 \newtheorem{example}[theorem]{Example}
  21 \newtheorem{exercise}[theorem]{Exercise}
  22 \newtheorem{lemma}[theorem]{Lemma}
  23 \newtheorem{notation}[theorem]{Notation}
  24 \newtheorem{problem}[theorem]{Problem}
  25 \newtheorem{proposition}[theorem]{Proposition}
  26 \newtheorem{remark}[theorem]{Remark}
  27 \newtheorem{solution}[theorem]{Solution}
  28 \newtheorem{summary}[theorem]{Summary}
  29 \newenvironment{proof}[1][Proof]{\noindent\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
  30 \geometry{left=0.5in,right=0.5in,top=0.5in,bottom=0.5in}
  31 
  32 \begin{document}
  33 	
\section{These are all the glossary terms from my AI class CSCE 876 at UNL}
  35 
  36 
  37 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
  38 
$A^*$ Search  &  The most widely known form of best-first
search, $A^*$ evaluates nodes using the evaluation function: \[ f(n)
= g(n) + h(n) \] $A^*$ has several nice properties. Using the
Tree-Search algorithm and an {\em admissible} heuristic function, we
have: \begin{itemize} \item Optimality \item Completeness \item
Optimal efficiency \end{itemize} If the heuristic function is
consistent, then we have: \begin{itemize} \item Optimality \item
Completeness \end{itemize} (A minimal code sketch follows this table.) \\ \hline
  47 
  48 $C^*$  &  This is the actual cost of the optimal solution path. \\
  49 \hline
  50 
  51 $f(n)$ (Evaluation Function)  &  The evaluation function is "the"
  52 problem-specific knowledge beyond the definition of the problem
  53 itself that is used to determine which node is chosen for expansion
  54 in the search process. \\ \hline
  55 
  56 $g(n)$ (Cost to node function)  &  This function gives the actual
  57 cost to reach node $n$ on the current path. \\ \hline
  58 
$h(n)$ (Heuristic function)  &  The estimated cost of the cheapest
  60 path from $n$ to a (the) goal. \\ \hline
  61 
  62 $k-$consistency  &  A CSP is $k-$consistent if, for any set of $k-1$
  63 variables and for any consistent assignment to those variables, a
  64 consistent value can always be assigned to any $k^{th}$ variable.
  65 CITATION(aitextbook) (p 147) \\ \hline
  66 
$n$-queens problem  &  The goal of the $n$-queens problem is to
  68 place $n$ queens on an $n \times n$ chess board where no queen is in
  69 a position to attack another queen. \\ \hline
  70 
  71 Abstraction (definition, goal, example)  &  The process of removing
  72 detail from a representation is called abstraction. We say the
  73 abstraction is valid if we can expand any abstract solution into a
  74 solution in the more detailed world. \\ \hline
  75 
  76 Admissible Heuristic  &  $h(n)$ is an admissible heuristic provided
  77 that $h(n)$ never overestimates the cost to reach the goal. The
  78 direct consequence of this is that $f(n)$ for $A^*$ never
  79 overestimates the true cost of a solution through $n$. \\ \hline
  80 
  81 Agent  &  An agent is just something that acts CITATION(aitextbook) \\
  82 \hline
  83 
Alldiff constraint  &  A constraint that indicates that all involved
  85 variables must be assigned a different value. CITATION(aitextbook)
  86 (p 140) \\ \hline
  87 
  88 Alliances  &  Alliances occur in many games involving more than two
  89 players. Alliances involve a group of players that are cooperating
  90 in a competitive environment. CITATION(aitextbook) (p 166) \\ \hline
  91 
  92 Alpha-beta pruning  &  This procedure is based on a DFS. Consider a
  93 node $n$ somewhere in the tree such that Player has a choice of
  94 moving to that node. If Player has a better choice $m$ either at the
  95 parent node of $n$ or at any choice point further up, then $n$ {\em
  96 will never be reached in actual play}. So once we have found out
  97 enough about $n$ (by examining its descendants in a DFS) to reach
  98 this conclusion, we can prune it. Alpha-beta pruning gets its name
  99 from the two parameters: \begin{description} \item[$\alpha =$] The
 100 value of the best choice we have found so far at any choice point
 101 along the path for $\max$. \item[$\beta =$] The value of the best
 102 choice we have found so far at any choice point along the path for
 103 $\min$. \end{description} CITATION(aitextbook) (p~168-169) \\ \hline
 104 
 105 And-elimination  &  $\frac{A \wedge B}{A}$ CITATION(aitextbook) (p
 106 211)
 107 \\ \hline
 108 
 109 Anytime algorithm  &  An anytime algorithm is an algorithm whose
 110 output quality improves gradually over time, so that it has a
 111 reasonable decision ready whenever it is interrupted.
 112 CITATION(aitextbook) (p 971). \\ \hline
 113 
 114 \end{tabular}
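
\noindent {\em My own minimal sketch (not from the textbook) of the $A^*$ node
ordering described in the ``$A^*$ Search'' entry above: nodes are expanded from
a priority queue ordered by $f(n)=g(n)+h(n)$. The problem interface
({\tt successors}, {\tt is\_goal}, {\tt h}) is assumed for illustration.}
\begin{verbatim}
import heapq, itertools

def astar(start, successors, is_goal, h):
    """Best-first search ordered by f(n) = g(n) + h(n).

    successors(state) is assumed to yield (action, next_state, step_cost)
    triples; h(state) is the heuristic estimate of the cost to a goal.
    """
    counter = itertools.count()   # tie-breaker so the heap never compares states
    frontier = [(h(start), next(counter), 0, start, [])]
    best_g = {start: 0}           # cheapest g(n) found so far for each state
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        for action, nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + h(nxt), next(counter), g2, nxt, path + [action]))
    return None, float("inf")     # no solution found
\end{verbatim}
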
 115 
 116 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
 117 
Arc consistency  &  An arc from $X$ to $Y$ is consistent if, for
every value $x$ of $X$, there is some value $y$ of $Y$ that is
consistent with $x$. CITATION(aitextbook) (p 145) (A minimal code sketch follows this table.) \\ \hline
 121 
 122 Arity (of a relation or a function)  &  Arity is the number of
 123 arguments CITATION(aitextbook) (p 246) \\ \hline
 124 
Assertion  &  A FOL sentence added to the KB is called an
 126 assertion. CITATION(aitextbook) (p 253) \\ \hline
 127 
 128 Atmost constraint  &  The atmost constraint has a number $P$ and
 129 variables $P_1,...,P_k$ that are assigned numbers such that
 130 $\sum_{i=1}^{k} P_i \leq P$. CITATION(aitextbook) (p 148) \\ \hline
 131 
 132 Atomic sentences  &  Consists of a single proposition symbol.
 133 CITATION(aitextbook) (p 204) \\ \hline
 134 
 135 Autonomy (autonomous agent)  &  Not relying on prior knowledge of
its designer, but rather on its own percepts. CITATION(aitextbook) \\
 137 \hline
 138 
Axiom  &  Axioms provide the basic factual information from which
 140 conclusions can be derived. CITATION(aitextbook) (p 255) \\ \hline
 141 
Background knowledge  &  These are the sentences of information that a
 143 knowledge base is initialized with. CITATION(aitextbook) (p 196) \\
 144 \hline
 145 
 146 Backjumping  &  The backjumping method backtracks to the most recent
 147 variable in the conflict set. CITATION(aitextbook) (p 149) \\ \hline
 148 
Backtracking Search  &  A variant of depth-first search that uses
still less memory. In backtracking, only one successor is generated
at a time rather than all successors; each partially expanded node
remembers which successor to generate next. Thus the memory
requirement is only $O(m)$ instead of $O(bm)$. \\ \hline
 154 
 155 Backtracking search  &  The backtracking search is a depth-first
 156 search that chooses values for one variable at a time and backtracks
 157 when a variable has no legal values left to assign.
 158 CITATION(aitextbook) (p 141) \\ \hline
 159 
 160 Backward chaining  &  Backward chaining starts with the query for
 161 the fact $q$ and finds those implications in the knowledge base that
 162 conclude $q$. If all of the premises of one of these implications
 163 can be proved true (through backward chaining), then $q$ is true.
 164 CITATION(aitextbook) (p 220) \\ \hline
 165 
Belief State  &  In a sensorless problem, the agent may only
know the set of states it could possibly start in. By applying the successor
 168 function to each of these states it can determine a set of states
 169 that could be reached by a specific action. The set of states that
 170 the agent could be in at any one time is the {\em belief state}. \\
 171 \hline
 172 
 173 Best-first Search  &  Best-first search is an instance of the
 174 general Tree-Search or Graph-Search algorithm in which a node is
 175 selected for expansion based on an evaluation function, $f(n)$. \\
 176 \hline
 177 
 178 Biconditional  &  $A \Leftrightarrow B$ is a biconditional that
states that $A$ is true if and only if $B$ is true, or alternatively if
 180 $A \Rightarrow B$ and $B \Rightarrow A$. CITATION(aitextbook) (p
 181 205)
 182 \\ \hline
 183 
 184 Bidirectional search  &  The strategy is to run two simultaneous
 185 searches, one from the initial state forward and one from the goal
 186 state backwards. This is not always possible, and requires that the
 187 successor function be invertible. This should cut the search depth
by a factor of two so that the storage requirements are $O(b^{d/2})
\ll b^d$. \\ \hline
 190 
 191 Binary constraint  &  A binary constraint relates two variables.
 192 CITATION(aitextbook) (p 140) \\ \hline
 193 
 194 Binding list  &  When posing a query $\exists x~Predicate(x)$,
 195 answering yes is not very helpful. The standard form of an answer to
 196 such a query includes a set of values that satisfy the query. This
 197 set is called a substitution or binding list. CITATION(aitextbook)
 198 (p 254) \\ \hline
 199 
 200 Blind Search  &  See Uninformed search \\ \hline
 201 
 202 Body (of a Horn clause)  &  The negative literals of a horn clause
 203 form the body. CITATION(aitextbook) (p 218) \\ \hline
 204 
 205 Boolean CSPs  &  Boolean CSPs are CSPs where the domains of the
 206 variables are either true or false. CITATION(aitextbook) (p 139) \\
 207 \hline
 208 
 209 Bounded differences (constraint of)  &  A bounded difference
constraint has the form $const_1 ~~ \theta ~~ x-y ~~ \theta ~~
const_2$, where $\theta \in \{<,>,\leq,\geq\}$. CITATION(ainotes) \\
 212 \hline
 213 
 214 Bounds Propagation  &  We say that a CSP is bounds-consistent if for
 215 every variable $X$, and for both the lower bound and upper bound
 216 values of $X$, there exists some value of $Y$ that satisfies the
 217 constraint between $X$ and $Y$, for every variable $Y$. This kind of
 218 constraint propagation is called {\em bounds propagation}.
 219 CITATION(aitextbook) (p 148) \\ \hline
 220 
 221 Branching Factor  &  The {\em branching factor} is the maximum
 222 number of successors of any node. \\ \hline
 223 
 224 Breadth-first Search  &  The BFS expands nodes from a FIFO queue.
 225 That is, as each node is discovered it is put into a FIFO queue.
 226 This results in all nodes being expanded at each level before a new
 227 level is expanded. The storage requirement is $O(b^{d+1})$. This
 228 requirement is prohibitive. \\ \hline
 229 
 230 \end{tabular}
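
\noindent {\em A minimal sketch (my own, not from the textbook) of the arc
consistency check from the ``Arc consistency'' entry above: values of $X$ with
no consistent counterpart in $Y$ are deleted. The binary constraint is assumed
to be given as a predicate {\tt allowed(x, y)}.}
\begin{verbatim}
def revise(domains, X, Y, allowed):
    """Make the arc X -> Y consistent by deleting unsupported values of X.

    domains maps each variable to a set of values; allowed(x, y) is the
    (assumed) binary constraint between X and Y.  Returns True if any
    value was removed from X's domain.
    """
    removed = False
    for x in set(domains[X]):                 # iterate over a copy
        if not any(allowed(x, y) for y in domains[Y]):
            domains[X].discard(x)
            removed = True
    return removed

# Usage: enforcing X < Y over small integer domains.
doms = {"X": {1, 2, 3}, "Y": {1, 2}}
revise(doms, "X", "Y", lambda x, y: x < y)    # doms["X"] becomes {1}
\end{verbatim}
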
 231 
 232 
 233 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
 234 
 235 Chinese room experiment  &  The system consists of a human, who
 236 understands only English (CPU), equipped with a rule book (PROGRAM),
 237 written in English, and various stacks of paper, some blank, some
 238 with indecipherable inscriptions (memory). The system is inside a
 239 room with a small opening to the outside. Through the opening appear
 240 slips of paper with indecipherable symbols. The human finds matching
 241 symbols in the rule book, and follows the instructions. The
 242 instructions may include writing symbols on new slips of paper,
 243 finding symbols in the stacks, rearranging the stacks and so on.
 244 Eventually, the instructions will cause one or more symbols to be
 245 transcribed onto a piece of paper that is passed back to the outside
 246 world with the correct response. Searle argues against the Turing
 247 test in that just running the right program does not generate
 248 understanding. CITATION(aitextbook) \\ \hline
 249 
 250 Chronological backtracking  &  The process of a backtrack search
where we back up to the preceding variable and try a different value
 252 is called chronological backtracking, because the most recent
 253 decision point is revisited. CITATION(aitextbook) (p 148) \\ \hline
 254 
 255 Coercion  &  Coercion is the ability of an agent to reach a goal
state even in a sensorless problem. The agent does not know which of
the possible states it starts in, but it can reach a goal by reasoning about sets of
 258 states. See the Vacuum cleaner agent on p84 for a good example. \\
 259 \hline
 260 
 261 Cognitive science  &  The interdisciplinary field of cognitive
 262 science brings together computer models from AI and experimental
 263 techniques from psychology to try to construct precise and testable
 264 theories of the workings of the human mind. CITATION(aitextbook) \\
 265 \hline
 266 
 267 Competitive Agent  &  In a multiagent system, when one's performance
 268 depends upon the performance of another in an inverse relationship,
 269 then the two agents are in competition and may be called competitive
 270 agents. \\ \hline
 271 
 272 Complementary literals  &  Literals such that one is a negation of
 273 the other. CITATION(aitextbook) (p 214) \\ \hline
 274 
 275 Complete-state formulation  &  A {\em complete-state formulation}
starts with a complete proposed solution and then changes it to
 277 find a solution that satisfies the goal test. \\ \hline
 278 
Completeness  &  If there exists a solution, then the algorithm is
 280 guaranteed to find a solution. \\ \hline
 281 
Completeness (of inference)  &  An inference procedure $i$ is complete if
whenever KB $\models \alpha$, it is also true that KB $\vdash_i
\alpha$. That is, the procedure will answer any question whose
answer follows from what is known by the KB. CITATION(ainotes)
 286 (10.23)
 287 \\ \hline
 288 
 289 Compositional (languages)  &  In a compositional language, the
 290 meaning of a sentence is a function of the meaning of its parts.
 291 CITATION(aitextbook) (p 241) \\ \hline
 292 
Compositionality  &  In a compositional language, the meaning of a
 294 sentence is a function of the meaning of its parts.
 295 CITATION(aitextbook) (p 241) \\ \hline
 296 
Condition-action rule  &  A condition-action rule is a tuple
 298 (Condition, Action). When the condition is matched by the sensor
 299 input, the Simple Reflex agent will perform the "Action". \\ \hline
 300 
 301 Conflict set  &  A more intelligent approach to backtracking than
 302 chronological backtracking is to go all the way back to one of the
 303 set of variables that {\em caused the failure}. This set is called
 304 the {\bf conflict set}. CITATION(aitextbook) (p 148) \\ \hline
 305 
 306 Conflict-directed backjumping  &  The backtracking method that backs
 307 up to the source of a conflict that prohibits the finding of a
 308 consistent assignment to the remaining variables.
 309 CITATION(ainotes,aitextbook) (p 149) \\ \hline
 310 
 311 Conjunctive normal form  &  A sentence expressed as a conjunction of
disjunctions of literals is said to be in conjunctive normal form (CNF).
CITATION(aitextbook) (p 215) (A worked example follows this table.) \\ \hline
 314 
 315 Connectionism  &  Connectionism models mental or behavioral
 316 phenomena as the emergent processes of interconnected networks of
 317 simple units. There are many different forms of connectionism, but
 318 the most common forms utilize neural network models.
 319 CITATION(Wikipedia:Connectionism) \\ \hline
 320 
Consistency (as a property of the $h$ function)  &  A heuristic
$h(n)$ is consistent if, for every node $n$ and every successor $n'$
of $n$ generated by any action $a$, the estimated cost of reaching
the goal from $n$ is no greater than the step cost of getting to
$n'$ plus the estimated cost of reaching the goal from $n'$: \[ h(n)
\leq c(n,a,n') + h(n'). \] \\
\hline
 327 
 328 Consistent assignment  &  An assignment that does not violate any
 329 constraints is called a consistent or legal assignment.
 330 CITATION(aitextbook) (p 137) \\ \hline
 331 
Constant symbol  &  A symbol in FOL that stands for an object.
 333 CITATION(aitextbook) (p 246) \\ \hline
 334 
 335 Constraint graph  &  A graph where nodes correspond to the variables
 336 of the problem and arcs correspond to the constraints.
 337 CITATION(aitextbook) (p 138) \\ \hline
 338 
 339 Constraint hypergraph  &  A constraint hypergraph contains two types
 340 of nodes: variables and constraints. arcs from constraint nodes to
 341 variable nodes indicate that the variable is involved in the
 342 constraint. This is used for representing constraints with an arity
 343 greater than 2. CITATION(ainotes) \\ \hline
 344 
 345 \end{tabular}
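
\noindent {\em A worked example (my own) for the ``Conjunctive normal form''
entry above, converting a biconditional to CNF by eliminating
$\Leftrightarrow$ and $\Rightarrow$, applying De Morgan's law, and then
distributing $\vee$ over $\wedge$:}
\begin{align*}
A \Leftrightarrow (B \vee C)
  &\equiv (A \Rightarrow (B \vee C)) \wedge ((B \vee C) \Rightarrow A) \\
  &\equiv (\neg A \vee B \vee C) \wedge (\neg(B \vee C) \vee A) \\
  &\equiv (\neg A \vee B \vee C) \wedge ((\neg B \wedge \neg C) \vee A) \\
  &\equiv (\neg A \vee B \vee C) \wedge (\neg B \vee A) \wedge (\neg C \vee A).
\end{align*}
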
 346 
 347 
 348 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
 349 
Constraint propagation  &  This is the general term for propagating
 351 the implications of a constraint on one variable onto other
 352 variables. CITATION(aitextbook) (p 145) \\ \hline
 353 
 354 Constraint scope  &  The scope of a constraint is the set of
 355 variables that are involved in the constraint CITATION(ainotes). \\
 356 \hline
 357 
Constructive Search  &  This is the standard search process where we
 359 assign a value to a variable at each node starting from the first
 360 variable to the last. CITATION(ainotes). \\ \hline
 361 
 362 Contingency problem  &  If the environment is partially observable
 363 or if actions are uncertain, then the agent's percepts provide new
 364 information after each action. Each possible percept defines a {\em
 365 contingency} that must be planned for. A contingency problem is {\em
 366 adversarial} if the uncertainty is caused by the actions of other
 367 agents. \\ \hline
 368 
 369 Continuous domains  &  Where the variables require continuous values
 370 such as the real numbers we say we are using continuous domains.
 371 Problems that require precise values often require continuous
 372 domains. CITATION(aitextbook) (p 139) \\ \hline
 373 
 374 Contours  &  The fact that $f$-costs are nondecreasing along any
 375 path also means that we can draw contours in the state space, just
 376 like the contours in a topographic map. \\ \hline
 377 
Crossover  &  In genetic algorithms, two states are crossed by
choosing a point at which to split the states and then "crossing" them:
the first child is state 1's first part followed by state 2's second part,
and similarly for the second child. The split point is called the
"crossover" point. (A minimal code sketch follows this table.) \\
\hline
 383 
 384 Current State  &  Local search algorithms operate using a single
 385 "current state" and generally move only to neighboring states. The
 386 current state is then a complete description of a solution that may
 387 not meet all of the constraints or may not be optimal. (Of course it
 388 could be optimal too, but then it would be a solution) \\ \hline
 389 
Cutset conditioning  &  Cutset conditioning solves a CSP by choosing a
cycle cutset, instantiating its variables in all consistent ways, and
solving the remaining tree-structured CSP for each instantiation.
CITATION(aitextbook) (p 154) \\
\hline
 393 
 394 Cycle cutset  &  A subset $S$ called the cycle cutset is a set of
 395 variables of a CSP such that the constraint graph becomes a tree
 396 after the removal of $S$ CITATION(aitextbook) (p 153). \\ \hline
 397 
 398 Data driven  &  Data driven reasoning is reasoning in which the
 399 focus of attention starts with the known data. CITATION(aitextbook)
 400 (p 219) \\ \hline
 401 
 402 Data Mining  &  Data mining, also known as knowledge-discovery in
 403 databases (KDD), is the practice of automatically searching large
 404 stores of data for patterns. CITATION(Wikipedia:Datamining) \\
 405 \hline
 406 
Decision problem  &  Any problem that must be answered yes or
 408 no. \\ \hline
 409 
 410 Deduction  &  Logical inference. That is to use the rules of logic
 411 to infer a conclusion. We say a conclusion is reached by deduction
 412 if it is logically inferred by the premise(s). \\ \hline
 413 
 414 Deduction theorem  &  For any sentences $\alpha$ and $\beta$,
 415 $\alpha \models \beta$ if and only if the sentence $(\alpha
 416 \Rightarrow \beta)$ is valid. CITATION(aitextbook) (p 210) \\ \hline
 417 
 418 Definite clause  &  Horn clauses with exactly one positive literal
 419 are called definite clauses. CITATION(aitextbook) (p 218) \\ \hline
 420 
 421 Degree (ordering heuristic)  &  Attempts to reduce the branching
 422 factor by selecting the variable that is involved in the largest
 423 number of constraints. CITATION(aitextbook) (p~144) \\ \hline
 424 
 425 Depth-First Search  &  This strategy expands nodes from a LIFO
 426 queue, also known as a stack. This results in nodes being expanded
 427 as deep as they can before exploring other nodes at the same level.
 428 Thus it is called the DFS. The space requirements for DFS are
 429 $O(bm)$. There is no guarantee that this will ever find a goal state
 430 because it may get caught in a loop. \\ \hline
 431 
 432 Depth-Limited Search  &  This is just a DFS search where we limit
 433 the depth that the search may descend to. It is also not guaranteed
 434 to find a solution in general, but we may guarantee a goal state is
 435 found if we know a goal exists within a certain depth. This has the
 436 same requirements as DFS \\ \hline
 437 
 438 Diameter (of a graph)  &  Given a graph $G=(V,E)$, the distance
 439 between two vertices $u$ and $v$ in $G$, denoted $d(u,v)$, is the
 440 length of the shortest $u-v$ path in $G$. The {\em diameter} of $G$,
 441 denoted diam$(G)$, is the maximum distance between any two vertices
 442 in $G$. CITATION(GraphTheoryThulasiramanSwamy) \\ \hline
 443 
 444 Domain (of a model)  &  The domain of a model is the set of objects
 445 that it contains. CITATION(ainotes) (p 245) \\ \hline
 446 
Domain (of a variable)  &  Given a CSP, the domain of a variable is
the set of values that the variable may be assigned.
 449 CITATION(ainotes,aitextbook) (p 137) \\ \hline
 450 
 451 Domain/degree heuristics (not in textbook)  &  The degree heuristic
 452 chooses the variable to assign that is involved in the largest
 453 number of constraints on other unassigned variables.
 454 CITATION(aitextbook) (p 144 - it is in the book) \\ \hline
 455 
 456 \end{tabular}
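
\noindent {\em A minimal sketch (my own illustration, not from the textbook) of
the crossover and mutation operations described in the ``Crossover'' entry
above, for states encoded as strings; the example states and alphabet are made
up for illustration.}
\begin{verbatim}
import random

def crossover(parent1, parent2):
    """Split both parents at a random crossover point and swap the tails."""
    point = random.randint(1, len(parent1) - 1)       # the "crossover" point
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

def mutate(state, alphabet, rate=0.1):
    """Randomly change each element of the state with a small probability."""
    return "".join(random.choice(alphabet) if random.random() < rate else c
                   for c in state)

# Usage with 8-queens-style states encoded as digit strings.
child1, child2 = crossover("32752411", "24748552")
child1 = mutate(child1, "12345678")
\end{verbatim}
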
 457 
 458 
 459 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
 460 
 461 Domination (of two functions)  &  We say a heuristic function
$h_1(n)$ dominates another heuristic function $h_2(n)$ if \[ h_1(n)
\geq h_2(n) \quad \forall n. \] \\ \hline
 464 
Effective branching factor CITATION(ainotes)  &  One way to
characterize the quality of a heuristic is the {\em effective
branching factor} $b^*$. In words, this is the branching factor that a
uniform tree of depth $d$ would need in order to contain $N+1$
nodes, where $d$ is the depth of the search tree and $N$ is the
number of nodes generated. That is, summing over the depths starting at $0$ we
have: \[ N + 1 = 1 + b^* + (b^*)^2 + ... + (b^*)^d =
\frac{(b^*)^{d+1}-1}{b^* - 1} \] (A worked example follows this table.) \\ \hline
 473 
 474 Entailment  &  Let KB be a knowledge base of sentences and let
 475 $\alpha$ be a sentence. $\alpha$ is entailed by a KB if the models
 476 of the KB are {\em all} models of $\alpha$. CITATION(ainotes) (11.5)
 477 For my own notes: Entailment is similar to implication except the
 478 domain of the variables are sentences (or propositions). KB $\models
 479 \alpha$ iff all models of KB are models of $\alpha$. ($i.e., M(KB)
 480 \subseteq M(\alpha)$) CITATION(ainotes) (11.5) \\ \hline
 481 
 482 Environment: Deterministic vs. Stochastic  &  If the next state of
 483 the environment is completely determined by the current state and
 484 the action executed by the agent, then we say the environment is
 485 deterministic, otherwise we say it is stochastic. \\ \hline
 486 
 487 Environment: Discrete vs. Continuous  &  The discrete/continuous
 488 distinction can be applied (1) to the state of the environment - is
 489 it constantly changing?, (2) to the way {\em time} is handled - is
 490 it continuous in the application or is it handled as discrete
 491 points?, and (3) to the {\em percepts} and {\em actions} of the
 492 agent - How can the agent perceive the environment, and how does it
 493 act (continuously or discretely)?. \\ \hline
 494 
Environment: Episodic vs. Sequential  &  If the agent's experience is
broken up into atomic episodes, and the choice of action in each episode
does not depend on actions taken in previous episodes, we call it episodic. If on the other hand current
 498 actions depend upon the sequence of actions taken previously, then
 499 we call the task environment sequential. \\ \hline
 500 
 501 Environment: Observable: Fully vs Partially  &  A task environment
 502 is effectively fully observable if the sensors detect all aspects
 503 that are {\em relevant} to the choice of action. Otherwise we say
 504 the task environment is partially observable. \\ \hline
 505 
 506 Environment: Single agent vs. Multiagent  &  The key distinction is
 507 whether an agent's behavior is best described as maximizing a
 508 performance measure whose value depends on other agents' behavior.
 509 \\ \hline
 510 
 511 Environment: Static vs. Dynamic  &  If the environment can change
 512 while the agent is deliberating, we say the environment is
 513 \textbf{dynamic}. If the environment does not change while the agent
 514 is deliberating, we call the environment \textbf{static}. If the
 515 score changes (decreases) as the agent deliberates in a static
 516 environment, we call the environment \textbf{semidynamic}. \\ \hline
 517 
 518 Environment: Strategic  &  If the environment is deterministic
 519 except for the actions of other agents, we say that the environment
 520 is strategic. (This was under Deterministic vs Stochastic in the
 521 book.) \\ \hline
 522 
 523 Epistemological level  &  Abstract description of what the agent
 524 knows about the world. CITATION(ainotes) (10.4) \\ \hline
 525 
 526 Exploration Problem  &  When the states and actions of the
 527 environment are unknown (e.g. you are dropped somewhere where you
 528 don't speak the language) the agent must act to discover them. This
 529 type of problem can be viewed as an extreme case of the contingency
 530 problem. \\ \hline
 531 
 532 Extended interpretation  &  An extended interpretation specifies a
 533 domain element in addition to the mapping given by the
 534 interpretation. CITATION(aitextbook) (p 249). \\ \hline
 535 
Extensionally defined constraint (not in textbook)  &  Extensionally
 537 defined constraints: all allowed tuples are listed. This is
 538 practical for defining arbitrary constraints. CITATION(ainotes) \\
 539 \hline
 540 
 541 Factoring  &  Factoring is the last step in resolution and involves
 542 removing multiple copies of literals from the expression.
 543 CITATION(aitextbook) (p 214) \\ \hline
 544 
 545 Finite domains  &  Variables with finite (or discrete) domains are
 546 the simplest kind of CSPs. CITATION(aitextbook) (p 139) \\ \hline
 547 
 548 \end{tabular}
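
\noindent {\em A worked example for the ``Effective branching factor'' entry
above (the numbers follow the textbook's 8-puzzle illustration, as I recall
it): if $A^*$ finds a solution at depth $d=5$ while generating $N=52$ nodes,
then $b^*$ satisfies}
\[ N + 1 = 53 = 1 + b^* + (b^*)^2 + (b^*)^3 + (b^*)^4 + (b^*)^5, \]
which gives $b^* \approx 1.92$.
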
 549 
 550 
 551 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
 552 
First-Order Logic  &  First-order logic adds to propositional logic:
\begin{itemize} \item symbols: objects, properties, functions and
relations \item quantifiers: $\exists$ and $\forall$
 556 \end{itemize} CITATION(ainotes) (12.4,5) \\ \hline
 557 
 558 Fitness function  &   In genetic algorithms each state produced in
 559 the next generation is evaluated or rated using a fitness function.
 560 This determines how "good" the produced state is. \\ \hline
 561 
 562 Fixed point  &  The point at which applying inference rules produces
 563 no new facts. CITATION(aitextbook) (p 218) \\ \hline
 564 
 565 Forward chaining  &  If all the premises of an implication are
 566 known, then its conclusion is added to the list of known facts. The
 567 process continues until the desired fact is known. This process of
 568 chaining horn clauses together is called forward chaining.
 569 CITATION(aitextbook) (p 218) \\ \hline
 570 
 571 Forward Checking  &  Whenever a variable $X$ is assigned, the
 572 forward checking process looks at each unassigned variable $Y$ that
 573 is connected to $X$ by a constraint and deletes from $Y$'s domain
 574 any value that is inconsistent with the value chosen for $X$.
 575 CITATION(aitextbook) (p 144) \\ \hline
 576 
 577 Fringe  &  The collection of nodes that have been generated but not
 578 yet expanded is called the fringe (p 70). \\ \hline
 579 
 580 Function  &  In terms of FOL, a function is a relation in which
 581 there is only one "value" for a given "input." CITATION(aitextbook)
 582 (p 242) \\ \hline
 583 
 584 Function (mathematical definition)  &  A relation $f:A \rightarrow
 585 B$ is said to be a function if and only if $f$ maps each $a \in A$
 586 to only one value in $B$. (Dr. Anderson) \\ \hline
 587 
 588 Genetic algorithm  &  A genetic algorithm is a variant of stochastic
 589 beam search in which successor states are generated by combining two
 590 parent states. \\ \hline
 591 
 592 Global constraint  &  Involves all the variables in a CSP.
 593 CITATION(ainotes) \\ \hline
 594 
 595 Global optimum (maximum or minimum)  &  If elevation in a state
 596 space landscape corresponds to cost, then the aim is to find the
 597 lowest valley (minimum). If the landscape corresponds to an
 598 objective function, then the aim is to find the highest peak. \\
 599 \hline
 600 
 601 Goal Test  &  The goal test determines whether a given state is a
 602 goal state. \\ \hline
 603 
 604 Goal-directed reasoning  &  Goal directed reasoning is useful for
 605 answering specific questions such as "What shall I do now?" and
 606 "Where are my keys?" Backward chaining is an example of goal
 607 directed reasoning. CITATION(aitextbook) (p 220) \\ \hline
 608 
 609 Gradient ascent  &  The gradient defines the direction of the
 610 steepest ascent (in this case) or descent. (From calculus) \\ \hline
 611 
 612 Greedy Algorithm  &  A greedy algorithm always makes the choice that
 613 looks best at the moment. CITATION(IntroToAlgorithms) The Best-first
 614 search, also called the "greedy search" or "greedy best-first
 615 search", tries to expand the node that is closest to the goal, on
 616 the grounds that this is likely to lead to a solution quickly. Thus
 617 it evaluates nodes by using just the heuristic function \[ f(n) =
 618 h(n) \] \\ \hline
 619 
 620 Greedy local search  &  Hill climbing is sometimes called a greedy
 621 local search because it grabs a good neighbor state without thinking
 622 ahead about where to go next. \\ \hline
 623 
 624 Ground term  &  A term with no variables is called a ground term.
 625 CITATION(aitextbook) (p 249) \\ \hline
 626 
 627 Grounding  &  Grounding is the process of determining if the logical
 628 reasoning processes and the real environment in which the agent
 629 exists are consistent. The question is, "how do we know that KB is
 630 true in the real world?" Of course if you worried about this, you
 631 would never have time to worry about anything else.
 632 CITATION(aitextbook) (p 204) \\ \hline
 633 
Manhattan distance &  This is the sum of the distances to the object in
 635 each dimension. Sometimes called the city block distance because you
 636 can't cut through the buildings. \\ \hline
 637 
 638 Head (of a Horn clause)  &  The positive literal of a horn clause is
 639 called the head. CITATION(aitextbook) (p 205) \\ \hline
 640 
 641 Heuristic Function  &  See $h(n)$. \\ \hline
 642 
 643 Heuristic Search  &  See "Informed search" \\ \hline
 644 
 645 Higher-Order Logic  &  Higher-Order logic views the relations and
 646 functions in first-order logic as objects in themselves.
 647 CITATION(aitextbook) (p 244) \\ \hline
 648 
 649 Hill climbing: first choice  &  Implements stochastic hill climbing
by generating successors randomly until one is generated that is
better than the current state. (A minimal code sketch follows this table.) \\ \hline
 652 
 653 \end{tabular}
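
\noindent {\em A minimal sketch (my own, not from the textbook) of the
first-choice hill climbing described in the ``Hill climbing: first choice''
entry above. The functions {\tt random\_successor} and {\tt value} are assumed
problem-specific helpers.}
\begin{verbatim}
def first_choice_hill_climbing(state, random_successor, value, max_tries=100):
    """Repeatedly sample random successors and move to the first one that is
    strictly better than the current state (first-choice variant of
    stochastic hill climbing)."""
    while True:
        for _ in range(max_tries):
            candidate = random_successor(state)
            if value(candidate) > value(state):
                state = candidate          # take the first improving move
                break
        else:
            # no better successor found in max_tries samples:
            # treat the current state as a local optimum and stop
            return state
\end{verbatim}
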
 654 
 655 
 656 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
 657 
 658 Hill climbing: random restart  &  Because hill climbing will find it
 659 harder and harder to find a better state as time goes on, we may
 660 stop the search process and start it from a new random location.
 661 This is called random restart. \\ \hline
 662 
 663 Hill climbing: stochastic  &  Stochastic hill climbing chooses at
 664 random from among the uphill moves where the probability of
 665 selection can vary with the steepness of the uphill move. \\ \hline
 666 
 667 Horn clauses  &  A horn clause is a disjunction of literals of which
 668 at most one is positive. CITATION(aitextbook) (p 204) \\ \hline
 669 
 670 Incremental formulation  &  An incremental formulation involves
 671 operations that augment or {\em build} the state description. The
 672 goal is to build a state that satisfies the goal test and at any
 673 point in the process, if we find we can not satisfy the goal state
 674 using the last augmentation, we may back track.\\ \hline
 675 
Induction  &  General rules acquired by exposure to repeated
 677 associations between their elements. CITATION(aitextbook) \\ \hline
 678 
 679 Inference  &  Deriving new sentences from old. In the case of
 680 logical agents, inference must obey the fundamental requirement that
 681 when one asks a question of the knowledge base, the answer should
 682 follow from what has been told the knowledge base previously.
 683 CITATION(aitextbook) (p 195) \\ \hline
 684 
 685 Infinite domains  &  Domains where the size of the set is infinite.
 686 This can include discrete domains such as the integers.
 687 CITATION(aitextbook) (p 139) \\ \hline
 688 
 689 Informed Search  &  Strategies that know whether one non-goal state
 690 is more promising than another non-goal state are called informed or
 691 heuristic searches. \\ \hline
 692 
 693 Initial State  &  The state that the agent starts in. \\ \hline
 694 
 695 Instantiated variable  &  A variable is instantiated if it has been
 696 assigned a value from its domain. CITATION(ainotes) \\ \hline
 697 
 698 Intended interpretation  &  The intended interpretation is the one
 699 possible interpretation that we specify. CITATION(aitextbook) (p
 700 246)
 701 \\ \hline
 702 
Intensionally defined constraint (not in textbook)  &  Intensionally
 704 defined constraints are used when it is not practical or even
 705 possible to list all tuples. This is the form we normally see using
 706 variables and relationship predicates. CITATION(ainotes) \\ \hline
 707 
 708 Interpretation  &  The semantics of FOL must relate sentences to
 709 models in order to determine truth. An interpretation specifies
 710 exactly which objects, relations and functions are referred to by
 711 the constant, predicate and function symbols.CITATION(aitextbook) (p
 712 246) An {\em interpretation} for a set $X$ of formulas {\bf is a
 713 domain} $D$ together with a rule that \begin{itemize} \item assigns
 714 to each $n$-place predicate symbol (that occurs in a formula) of $X$
 715 an $n$-place predicate in $D$; \item assigns to each $n$-place
 716 operation symbol of $X$ an $n$-place operation in $D$; \item assigns
 717 to each constant symbol of $X$ an element of $D$; \item assigns to
 718 $=$ the identity predicate $=$ in $D$, defined by: $a=b$ is true if
 719 and only if $a$ and $b$ are the same. CITATION(FOML) \end{itemize} \\
 720 \hline
 721 
Intractability  &  We say an algorithm is intractable if the time
 723 to run the algorithm increases exponentially as a function of the
 724 size of the problem. (From 310 or 823?) \\ \hline
 725 
 726 Intractability  &  A problem is called intractable if the time
required to solve instances of the problem grows exponentially with
 728 the size of the instances. (p 8) \\ \hline
 729 
Iterative-Deepening Search  &  Also called the {\em iterative
deepening depth-first search}, this is just a DFS performed to a
depth $d$, where $d$ iteratively increases from $0$ to some limiting
value $l$ until a goal is found. This will occur when $d$ reaches
the depth of the shallowest goal state. The space complexity is
$O(bd)$, linear like that of DFS. Unlike DFS it is complete, and it finds the optimal goal
when the path cost is a non-decreasing function of the depth of the
node. {\em In general, iterative deepening is the preferred
uninformed search method when there is a large search space and the
depth of the solution is not known}. (A minimal code sketch follows this table.) \\ \hline
 740 
 741 Iterative-Lengthening Search  &  Identical to the
 742 Iterative-Deepening Search except that the search uses an increasing
 743 path-cost limit instead of an increasing depth limit. This
 744 guarantees optimality while avoiding expensive memory requirements.
 745 \\ \hline
 746 
 747 k-CNF  &  A sentence expressed as a conjunction of disjunctions of
 748 literals is said to be in conjunctive normal form. A sentence is
 749 $k-$CNF if it has exactly (or at most) $k$ literals per clause.
 750 CITATION(aitextbook) (p 215) \\ \hline
 751 
 752 \end{tabular}
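
\noindent {\em A minimal sketch (my own, not from the textbook) of the
iterative deepening search described in the ``Iterative-Deepening Search''
entry above. The problem interface ({\tt successors}, {\tt is\_goal}) is
assumed for illustration.}
\begin{verbatim}
def depth_limited(state, successors, is_goal, limit, path=()):
    """DFS that gives up below the given depth limit; returns a path or None."""
    if is_goal(state):
        return list(path) + [state]
    if limit == 0:
        return None
    for nxt in successors(state):
        result = depth_limited(nxt, successors, is_goal, limit - 1,
                               path + (state,))
        if result is not None:
            return result
    return None

def iterative_deepening(start, successors, is_goal, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, successors, is_goal, limit)
        if result is not None:
            return result
    return None
\end{verbatim}
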
 753 
 754 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
 755 
Knowledge acquisition  &  Knowledge acquisition is the process of
acquiring knowledge. This can be done by exploration or by
consulting an outside expert to obtain the knowledge required (p 261). \\
\hline
 760 
 761 Knowledge base  &  A set (conjunction) of sentences that represent
 762 facts about the world. CITATION(aitextbook) (p 195) \\ \hline
 763 
 764 Knowledge representation  &  One of the six disciplines of AI
 765 dedicated to the "representation" of knowledge that an entity knows
 766 or senses. CITATION(aitextbook) \\ \hline
 767 
 768 Knowledge representation language  &  A formal language used to
 769 represent sentences of information in the knowledge base.
 770 CITATION(aitextbook) (p 195) \\ \hline
 771 
 772 Leaf Node  &  A leaf node is an element of the fringe, that is, a
 773 node with no successors in the tree. \\ \hline
 774 
 775 Limited rationality  &  It is not always possible to do precisely
the right thing in complicated environments due to high computational
 777 demands. CITATION(aitextbook) \\ \hline
 778 
 779 Linear constraints  &  Constraints in which each variable appears
 780 only in linear form. CITATION(ainotes) \\ \hline
 781 
 782 Linear Programming  &  Linear programming involves CSPs where
 783 constraints must be linear inequalities forming a convex region.
 784 CITATION(aitextbook) p 140 \\ \hline
 785 
Literal  &  A literal is a single symbol or its negation.
 787 CITATION(aitextbook) (p 204) \\ \hline
 788 
 789 Local Optimum  &  This corresponds to a peak or a valley in the
state space landscape that is not the best solution. This is where a
 791 search may get stuck and need a random restart. \\ \hline
 792 
Local Search  &   Local searches do not care about the path to the
solution, and so they don't keep track of where they have been. They
use very little memory and they can often find solutions in large or
 796 infinite (continuous) state spaces for which systematic algorithms
 797 are unavailable. \\ \hline
 798 
Logical connectives  &  Negation, conjunction, disjunction, implication,
and biconditional, which are respectively: $\neg, \wedge, \vee,
 801 \Rightarrow, \Leftrightarrow$. CITATION(aitextbook) (p 204-205) \\
 802 \hline
 803 
 804 Logical equivalence  &  Two sentences $A$ and $B$ are logically
 805 equivalent if they are true in the same set of models. This is
 806 written: $A \equiv B$. CITATION(aitextbook) (p 210) \\ \hline
 807 
 808 Min-conflict heuristic  &  The min-conflicts heuristic chooses a
 809 value for a variable that causes the fewest conflicts with other
 810 variables. CITATION(ainotes) \\ \hline
 811 
 812 Minimax algorithm  &  The minimax algorithm computes the minimax
decision from the current state. It uses a simple recursive
 814 computation of the minimax values of each successor state, directly
 815 implementing the defining equations. The recursion proceeds all the
 816 way down to the leaves of the tree, and then the minimax values are
backed up through the tree as the recursion unwinds.
CITATION(aitextbook) (p 165) (A minimal code sketch follows this table.) \\ \hline
 819 
Minimax decision  &  When the Minimax value has been computed for
each child node of the current node, the Minimax
decision is the action leading to the child with the largest value. That is, we choose
the action that is the optimal choice (max value) because it
guarantees the highest minimum value. CITATION(aitextbook) (p 165) \\
 825 \hline
 826 
 827 Minimax value &
 828 \begin{equation}
 829 Minimax\text{-}Value(n)=\left\{
 830 \begin{array}[c]{ll}
 831 Utility\left(n\right)                    & \text{if }n\text{ is a terminal state}\\
 832 \max_{s\in Succ(n)}Minimax\text{-}Val(s) & \text{if }n\text{ is a }\max\text{ node}\\
 833 \min_{s\in Succ(n)}Minimax\text{-}Val(s) & \text{if }n\text{ is a
 834 }\min\text{ node.}
 835 \end{array}
 836 \right.
 837 \end{equation}
 838 \\ \hline
 839 
 840 Minimum remaining values (a.k.a. least domain variable)  &  A
 841 heuristic that chooses the variable with the fewest legal
 842 values.CITATION(aitextbook) (p 143) \\ \hline
 843 
 844 Missionaries and cannibals problem  &  Three missionaries and three
cannibals are on one side of a river, along with a boat that can
hold one or two people. Find a way to get everyone to the other
side without ever leaving a group of missionaries in one place
 848 outnumbered by the cannibals in that place (p 90). \\ \hline
 849 
 850 Model (in general)  &  A model is a world in which a sentence is
 851 true under a particular interpretation. CITATION(ainotes) (11.4) \\
 852 \hline
 853 
 854 Model checking  &  Given a knowledge base KB and a sentence
 855 $\alpha$, model checking enumerates all possible models to check
 856 that a sentence $\alpha$ is true in all models in which KB is true.
 857 OR, model checking ensures that $\alpha$ is true in each model
 858 contained in KB. CITATION(aitextbook) (p 202) \\ \hline
 859 
 860 Model in propositional logic  &  A model is a mapping from
 861 proposition symbols directly to truth or falsehood. The models of a
 862 sentence are the mappings that make the sentence true.
 863 CITATION(ainotes) \\ \hline
 864 
 865 Modus ponens  &  $\frac{\alpha \Rightarrow \beta,~~ \alpha}{\beta}$
 866 CITATION(aitextbook) (p 211) \\ \hline
 867 
 868 \end{tabular}
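
\noindent {\em A minimal sketch (my own, not from the textbook) of the minimax
computation described in the ``Minimax algorithm'' and ``Minimax value''
entries above. The {\tt game} object with {\tt is\_terminal}, {\tt utility}
and {\tt successors} is an assumed interface.}
\begin{verbatim}
def minimax_value(state, game, maximizing):
    """Recursively compute the minimax value of state; maximizing is True
    at MAX nodes and False at MIN nodes."""
    if game.is_terminal(state):
        return game.utility(state)
    values = [minimax_value(s, game, not maximizing)
              for s in game.successors(state)]
    return max(values) if maximizing else min(values)

def minimax_decision(state, game):
    """At a MAX node, choose the successor state with the highest minimax
    value (the opponent moves next, so the children are MIN nodes)."""
    return max(game.successors(state),
               key=lambda s: minimax_value(s, game, False))
\end{verbatim}
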
 869 
 870 
 871 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
 872 
Monotonicity (as a property of the $h$ function)  &  A heuristic
function $h(n)$ is said to be monotone if it satisfies \[ h(n) \leq
c(n,n') + h(n') \quad \forall\, n,n' \text{ such that } n' \in SCS(n), \] where $SCS(n)$ gives
the successors of $n$. (A worked example follows this table.) \\ \hline
 877 
 878 Monotonicity (of inference)  &  Monotonicity says that the set of
 879 entailed sentences can only increase as information is added to the
 880 knowledge base. CITATION(aitextbook) (p 213) \\ \hline
 881 
 882 Mutation  &   In genetic algorithms after a crossover between two
states has been completed, we will sometimes mutate the resulting
state by randomly changing one element of it. This is called
 885 a mutation. \\ \hline
 886 
 887 Natural Language Processing  &  The ability of the computer to
 888 process English or some other natural language to communicate
 889 successfully. CITATION(aitextbook) \\ \hline
 890 
 891 Node consistency  &  A term meaning that a CSP is 1-consistent, or
 892 that any node by itself is consistent. CITATION(aitextbook) (p 147) \\
 893 \hline
 894 
 895 Node expansion (in search)  &  After we examine a state (node) $x$
 896 and determine that it is not a goal state, we determine the
successor states (nodes) using the successor function. This set of
states (nodes) is the expansion of $x$ (p 69). \\ \hline
 899 
NP-completeness  &  The theory of NP-completeness provides a method
to recognize an intractable problem. Any problem class to which the
 902 class of NP-complete problems can be reduced is likely to be
 903 intractable. Cook and Karp showed the existence of large classes of
canonical combinatorial search and reasoning problems that are
 905 NP-complete. CITATION(aitextbook) \\ \hline
 906 
 907 Objective function  &  Objective functions are part of the
 908 description of an optimization problem that measures states to
 909 determine the best state. (from Glossary 6) \\ \hline
 910 
 911 Objective function (in optimization problem)  &  Objective functions
 912 are part of the description of an optimization problem that measures
 913 states to determine the best state. \\ \hline
 914 
 915 Occam's (or Ockham's) razor  &  is a principle attributed to the
 916 14th-century English logician and Franciscan friar, William of
 917 Ockham. It forms the basis of methodological reductionism, also
 918 called the principle of parsimony or law of economy. In its simplest
 919 form, Occam's Razor states that one should make no more assumptions
 920 than needed. Put into everyday language, it says: Given two equally
 921 predictive theories, choose the simpler.
 922 CITATION(Wikipedia:OccamsRazor) \\ \hline
 923 
 924 Omniscience  &  Literally "all knowing". In the AI sense it means
 925 knowing all the information necessary to make the best choice that
 926 optimizes the performance measure. \\ \hline
 927 
 928 Open-loop  &  When given a {\bf static}, {\bf observable}, {\bf
 929 discrete}, {\bf deterministic} environment, solutions can be
executed without paying attention to the percepts. Thus an agent
carries out its plan with its eyes closed. Control theorists call
this an {\bf open-loop} system because ignoring the percepts breaks the loop between agent and
 933 environment. \\ \hline
 934 
Operator  &  The successor function gives a description of the possible
 936 actions available to the agent. An alternate formulation uses a set
 937 of operators that can be applied to a state to generate successors.
 938 (p 62) \\ \hline
 939 
 940 Optimal solution  &  According to AIAMA: The optimal solution has
 941 the lowest path cost among all solutions. \\ \hline
 942 
 943 Optimally efficient  &  If an algorithm is optimally efficient it
 944 means that no other optimal algorithm is guaranteed to expand fewer
 945 nodes except possibly through tie-breaking among nodes with
 946 $f(n)=C^*$. {\em We note that $A^*$ is optimally efficient {\bf and}
 947 that it is also exponential in practice.} \\ \hline
 948 
Optimization Problem  &  An optimization problem has as its goal to
 950 find the best state according to an objective function. \\ \hline
 951 
 952 Path  &  A {\em path} in the state space is a sequence of states
 953 connected by a sequence of actions. \\ \hline
 954 
 955 Path consistency  &  Any pair of adjacent variables can always be
 956 extended to a third neighboring variable - also called
 957 $3-$consistency. CITATION(aitextbook) (p 147) \\ \hline
 958 
 959 Path Cost  &  A {\em path cost} function assigns a numeric cost to
 960 each path from the root node to any node in the tree. \\ \hline
 961 
 962 Pathmax Equation CITATION(astar)  &  The PathMax function is an
 963 optimization routine that ensures monotonicity: \[
 964 \hat{f}(m)=\max\{f(n),f(m)\} \] for all $m \in \{succ(n)\}$. \\
 965 \hline
 966 
 967 \end{tabular}
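
\noindent {\em A small worked example (my own numbers) for the ``Monotonicity''
and ``Pathmax Equation'' entries above. Suppose $h(n)=6$, $c(n,n')=1$ and
$h(n')=4$ for a successor $n'$ of $n$. Then}
\[ h(n) = 6 > c(n,n') + h(n') = 5, \]
so $h$ is not monotone on this step. With $g(n)=3$ we get $f(n)=9$ but
$f(n')=g(n')+h(n')=4+4=8$; pathmax repairs this by taking
$\hat{f}(n')=\max\{f(n),f(n')\}=9$, so that $f$ is nondecreasing along the path.
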
 968 
 969 
 970 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
 971 
 972 Payoff function  &  Also called the {\bf objective function} or {\bf
 973 utility function}, it gives the numeric value for the terminal
 974 states. (Objective and utility functions generally give a value to
 975 each state). CITATION(aitextbook) (p 163) \\ \hline
 976 
 977 Percept Sequence  &  An agent's percept sequence is the complete
 978 history of everything the agent has ever perceived. \\ \hline
 979 
 980 Performance Measure  &  A performance measure embodies the criterion
 981 for success of an agent's behavior. \\ \hline
 982 
Planning  &  Planning is the process of finding a sequence of actions that
 984 achieves a desired effect given a suitable constructive inference
 985 algorithm (p 330). \\ \hline
 986 
 987 Plateau  &  An area in the state space landscape where the
evaluation function is flat. It could be a flat local maximum or a
 989 shoulder. \\ \hline
 990 
Ply  &  This is essentially a turn in a turn-taking game. In a two-player
game, one full move is made up of each of the two players taking
a turn. Each turn is called a half-move or ply. CITATION(aitextbook)
 994 (p 163) \\ \hline
 995 
 996 Predecessors  &  Predecessors of a state $x$ are all those states
 997 that have $x$ as a successor. \\ \hline
 998 
Predicate symbol  &  A symbol in FOL that stands for a relation.
1000 CITATION(aitextbook) (p 246) \\ \hline
1001 
1002 Premise  &  The premise or antecedent is the left hand side of an
1003 implication ($\Rightarrow$). CITATION(aitextbook) (p 205) \\ \hline
1004 
1005 Problem Formulation  &  Problem formulation is the process of
1006 deciding what actions and states to consider, given a goal. A
1007 problem can be defined formally by four components:
1008 \begin{enumerate} \setlength{\topsep}{0in} \setlength{\parsep}{0in}
1009 \setlength{\parskip}{0in} \setlength{\itemsep}{0in} \item initial
1010 state \item successor function \item goal test \item path cost
1011 \end{enumerate} \\ \hline
1012 
1013 Proof  &  A sequence of applications of inference rules is called a
1014 proof. CITATION(aitextbook) (p 212) \\ \hline
1015 
1016 Property  &  Unary relations CITATION(aitextbook) (p 242) \\ \hline
1017 
1018 Proposition symbol  &  A symbol that can be assigned true or false
1019 which represents a proposition (or statement). CITATION(aitextbook)
1020 (p 204) \\ \hline
1021 
1022 Propositional logic  &  Propositional logic is a very simple
1023 language consisting of proposition symbols and logical connectives.
1024 It can handle propositions that are known true, known false, or
1025 completely unknown. CITATION(aitextbook) (p 233) Propositional logic
1026 is made up of a set of symbols and boolean connectives $\wedge,
1027 \vee, \neg, \Rightarrow, \Leftrightarrow$ formed into sentences
according to a formal grammar given in Backus-Naur Form. The semantics
assign each sentence a truth value (True or False) based on the connectives.
1030 CITATION(ainotes) (11.3, 11.7) \\ \hline
1031 
Pruning  &  When a node $n'$ is generated and placed in the fringe as
part of the expansion of $n$, and $f(n') > C^*$, then $A^*$ will
never expand this node. When this happens
1035 we say that $n'$ is {\em pruned}. \\ \hline
1036 
1037 Pruning (in general)  &  Pruning is the process of eliminating
1038 possibilities from consideration without having to examine them. In
1039 a tree structure we often say that we prune a subtree or branch by
1040 eliminating it from consideration. CITATION(aitextbook) (p 100) \\
1041 \hline
1042 
Rationality  &  Rationality is the ideal
concept of intelligence (limited rationality is listed above). It
simply means to do the right thing. CITATION(aitextbook) \\ \hline
1046 
Reductio ad absurdum  &  Proving $\beta$ from $\alpha$ by checking
the unsatisfiability of $(\alpha \wedge \neg \beta)$ corresponds
exactly to the standard mathematical proof technique of {\em
reductio ad absurdum}. This is also called refutation: $\alpha
\models \beta$ if and only if the sentence $(\alpha \wedge \neg
\beta)$ is unsatisfiable. CITATION(aitextbook) (p 211) (A worked example follows this table.) \\ \hline
1053 
1054 Refutation completeness  &  Any complete search algorithm, applying
1055 only the resolution rule, can derive any conclusion entailed by any
1056 knowledge base in propositional logic. There is a caveat: resolution
1057 is complete in a specialized sense. Given that $A$ is true, we can
1058 not use the resolution to automatically generate the consequence $A
1059 \vee B$ is true. This is called {\bf refutation completeness},
1060 meaning that resolution can always be used to either confirm or
1061 refute a sentence, but it cannot be used to enumerate true
1062 sentences. CITATION(aitextbook) (p 214) \\ \hline
1063 
1064 Relation  &  A relation between $A$ and $B$ is defined as $R_{A,B}
1065 \subseteq A \times B$ or  $R_{A,B} \subseteq \{ (x,y) | x \in A, y
1066 \in B\}$ CITATION(ainotes) \\ \hline
1067 
1068 Relation (mathematical definition)  &  A relation between $A$ and
1069 $B$ is defined as $R_{A,B} \subseteq A \times B$ or  $R_{A,B}
1070 \subseteq \{ (x,y) | x \in A, y \in B\}$ CITATION(ainotes) \\ \hline
1071 
1072 \end{tabular}
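
\noindent {\em A small worked example (my own) for the ``Reductio ad absurdum''
entry above: to show $\{P,\ P \Rightarrow Q\} \models Q$, we check that}
\[ P \wedge (P \Rightarrow Q) \wedge \neg Q \]
is unsatisfiable. Any model of the first two conjuncts must make $Q$ true
(modus ponens), which contradicts $\neg Q$; no such model exists, so the
entailment holds.
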
1073 
1074 
1075 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
1076 
1077 Relaxed problem  &  A problem with fewer restrictions on the actions
1078 is called a {\em relaxed problem}. The cost of an optimal solution
1079 to a relaxed problem is an admissible heuristic for the original
1080 problem. \\ \hline
1081 
1082 Resolution  &  If $l_i$ and $m_j$ are complementary literals, then
1083 we have resolution defined as: \begin{equation} \frac{l_1 \vee ...
1084 \vee l_k, ~~~~ m_1 \vee ... \vee m_n} {l_1 \vee ... \vee l_{i-1}
1085 \vee l_{i+1} \vee ... \vee l_k \vee m_1 \vee ... \vee m_{j-1} \vee
1086 m_{j+1} \vee ... \vee m_n} \end{equation} CITATION(aitextbook) (p
214) (A worked example follows this table.)
1088 \\ \hline
1089 
1090 Resolution closure  &  Given a set of clauses $S$, resolution
1091 closure is the set of all derivable clauses. CITATION(aitextbook) (p
1092 217) \\ \hline
1093 
1094 Route-finding problem &  The Route-finding problem is defined in
1095 terms of locations and transitions along links between them. \\
1096 \hline
1097 
1098 Satisfiability  &  A sentence is satisfiable if it is true in some
1099 model. CITATION(aitextbook) (p 211) \\ \hline
1100 
1101 Search  &  The search process consists of finding a path from the
1102 initial state to a goal state (p 60). The essence of a search is to
1103 check a state (node) to determine if it satisfies the goal state and
1104 expanding it to find other options to examine later if it does not.
1105 (p 69) \\ \hline
1106 
1107 Search cost (vs. total cost)  &  Search cost typically depends on
1108 the time complexity , but can also include a term for memory usage.
1109 Total cost combines the search cost and the path cost of the
1110 solution found (p 72). \\ \hline
1111 
Search node  &  A search node is a node in the search tree; the root
node corresponds to the initial state. (p 69) \\ \hline
1114 
Search Strategy  &  The {\em search strategy} determines which
1116 state to expand next. \\ \hline
1117 
1118 Search Tree  &  A tree structure generated from the initial state
1119 and children found using the successor function defines the search
1120 tree. \\ \hline
1121 
1122 Semantics (of a logic)  &  The semantics of the language defines the
1123 truth of each sentence with respect to each possible world.
1124 CITATION(aitextbook) (p 207) \\ \hline
1125 
1126 Sensorless problem  &  (Also called conformant problems) If an agent
1127 has no sensors at all, then it could be in one of several possible
1128 initial states, and each action might lead to one of several
1129 possible successor states. To know that a goal state has been
1130 reached, all states in the successor set must be goal states. \\
1131 \hline
1132 
1133 Sentence  &  A sentence is a representation of a fact in the world
1134 in a {\em formal language}. CITATION(ainotes) (10.3) \\ \hline
1135 
Sideways move  &  When reaching a shoulder, we cannot find a state
1137 to move to that is better than the state we are in. To fix this we
1138 allow moving to states that are equal in hopes of finding a way off
1139 a shoulder. This is called a sideways move. \\ \hline
1140 
1141 Simulated annealing  &  This combines hill climbing with random walk
1142 in such a way that the amount of randomness slowly decreases (cools)
1143 over time. \\ \hline
1144 
1145 Size of a CSP  &  The size of a CSP is characterized by the number
1146 of constraints. CITATION(ainotes) \\ \hline
1147 
1148 Solution quality  &  Solution quality is measured by the path cost
1149 function. \\ \hline
1150 
1151 Soundness (of inference)  &  An inference algorithm that derives
1152 only entailed sentences is called sound. CITATION(aitextbook) (p
1153 203)
1154 \\ \hline
1155 
1156 State space  &  Together, the initial state and successor function
1157 implicitly define the state space which is the set of all states
1158 reachable from the initial state. \\ \hline
1159 
1160 Step Cost  &  The step cost of taking an action $a$ to
1161 go from state $x$ to state $y$ is denoted by $c(x,a,y)$. \\ \hline
1162 
1163 Straight-line distance  &  The shortest distance between two points.
1164 This is an especially interesting heuristic because it is always
1165 optimistic no matter how we define the points and the distance
1166 measure in our problem, since the shortest distance between two
1167 points is a straight line. \\ \hline
1168 
1169 Strong $k-$consistency  &  A graph is strongly $k-$consistent if it
1170 is $i-$consistent for $i \in \{1,\ldots,k\}$. CITATION(aitextbook) (p 147) \\
1171 \hline
1172 
1173 Substitution  &  When posing a query $\exists x~Predicate(x)$,
1174 answering yes is not very helpful. The standard form of an answer to
1175 such a query includes a set of values that satisfy the query. This
1176 set is called a substitution or binding list. CITATION(aitextbook)
1177 (p 254) \\ \hline
1178 
1179 Successor Function (in search) &  The successor function maps a state
1180 $s_i$ to the set of action--successor pairs $S(s_i)=\{<A_j ,
1181 s_{i+1}^{j}>\}$, where the $A_j$ are the applicable actions and
1182 $s_{i+1}^{j}$ is the resulting state. CITATION(ainotes) \\ \hline
1183 
1184 Syllogism  &  A syllogism is an inference in which one proposition
1185 (the conclusion) follows necessarily from two other premises. It is
1186 an irrefutable reasoning process (or pattern) that always yields
1187 correct conclusions given true premises. CITATION(aitextbook) \\ \hline
1188 
1189 \end{tabular}
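
\medskip
\noindent As a worked illustration of the resolution rule defined in the
table above (the two clauses here are invented purely for the example),
resolving $P \vee Q$ with $\neg P \vee R$ on the complementary literals
$P$ and $\neg P$ yields
\begin{equation}
\frac{P \vee Q, ~~~~ \neg P \vee R}{Q \vee R}
\end{equation}
The resolvent keeps every remaining literal from both input clauses;
this matches the general schema above with $k=n=2$ and $i=j=1$.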
1190 
1191 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
1192 
1193 Syntax (of a logic)  &  Defines the sentences in a language
1194 (grammar) CITATION(ainotes) (10.16) \\ \hline
1195 
1196 Task Environment$^*$  &  Specified by its PEAS description:
1197 Performance measure, Environment, Actuators, Sensors. \\ \hline
1198 
1199 Tautology  &  A sentence that is valid. CITATION(aitextbook) (p 210) \\
1200 \hline
1201 
1202 Term  &  A term is a logical expression that refers to an object.
1203 CITATION(aitextbook) (p 248) \\ \hline
1204 
1205 Terminal states  &  States where the game has ended are called
1206 terminal states. CITATION(aitextbook) (p 163) \\ \hline
1207 
1208 Terminal test  &  A terminal test determines when the game is over.
1209 CITATION(aitextbook) (p 163) \\ \hline
1210 
1211 Ternary constraint  &  A constraint involving 3 variables.
1212 CITATION(ainotes) \\ \hline
1213 
1214 Theorem  &  Theorems are sentences entailed by axioms.
1215 CITATION(aitextbook) (p 255) \\ \hline
1216 
1217 Total Turing test  &  A human judge engages in a natural language
1218 conversation with two other parties, one a human and the other a
1219 machine; if the judge cannot reliably tell which is which, then the
1220 machine is said to pass the test. To this "Turing test" we add a
1221 video signal and the opportunity to pass physical objects "through
1222 the hatch" to get a total Turing test. \cite{Wikipedia:TuringTest,
1223 aitextbook} \\ \hline
1224 
1225 Touring Problem  &  Visit every given city (vertex) at least once,
1226 starting and ending at a specific city. More precisely, given a graph
1227 $G=(V,E)$ and an initial vertex $v_0$, give a sequence of edges that
1228 starts and ends at $v_0$ and visits each vertex at least once (p 67). \\ \hline
1229 
1230 Toy Problems  &  A toy problem is intended to illustrate or exercise
1231 various problem-solving methods. It can be given a concise, exact
1232 description and is easily used by different researchers to compare
1233 the performance of algorithms. \\ \hline
1234 
1235 Transpositions  &  In games, repeated states occur frequently
1236 because of transpositions which are different permutations of the
1237 move sequence that end up in the same position (or state).
1238 CITATION(aitextbook) (p 170) \\ \hline
1239 
1240 Traveling Salesperson Problem (TSP - give a {\em formal} definition)
1241 CITATION(NPCompleteness) &  INSTANCE: A finite set
1242 $C=\{c_1,c_2,\ldots,c_m\}$ of cities, a distance $d(c_i,c_j) \in Z^+$ for
1243 each pair of cities $c_i,c_j \in C$, a bound $B \in Z^+$. QUESTION:
1244 Is there a "tour" of all the cities in $C$ having total length no
1245 more than $B$, that is, an ordering
1246 $<c_{\pi(1)},c_{\pi(2)},\ldots,c_{\pi(m)}>$ of $C$ such that
1247 \begin{equation}
1248 \left(\sum_{i=1}^{m-1}d(c_{\pi(i)},c_{\pi(i+1)})\right)+d(c_{\pi(m)},c_{\pi(1)}) \leq B?
1249 \end{equation} A small worked instance follows this table. \\ \hline
1250 
1251 Tree decomposition (of a CSP)  &  The process of decomposing a
1252 constraint graph into a tree of connected components. A tree
1253 decomposition must satisfy the following three requirements: (1)
1254 Every variable in the original problem appears in at least one of
1255 the subproblems. (2) If two variables are connected by a constraint
1256 in the original problem, they must appear together (along with the
1257 constraint) in at least one of the subproblems. (3) If a variable
1258 appears in two subproblems in the tree, it must appear in every
1259 subproblem along the path connecting those subproblems.
1260 CITATION(aitextbook) (p 154) \begin{definition} A
1261 \emph{tree-decomposition} of a graph $G$ is a tree $T$ whose nodes,
1262 called bags, are subsets of $V\left( G\right) $ such that:
1263 \begin{enumerate} \item $\bigcup_{X\in V\left( T\right) }X=V\left(
1264 G\right) ;$ \item $\forall \left\{ u,v\right\} \in E\left( G\right)
1265 $, $\exists X\in   V\left( T\right) $ such that $u,v\in X;$ and
1266 \item $\forall X,Y,Z\in V\left( T\right) $, if $Y$ is on the path
1267 from $X$   to $Z$ in $T$ then $X\cap Z\subseteq Y$. From Dourisboure
1268 and Gavoille CITATION(doorisboure03) \end{enumerate}
1269 \end{definition}
1270 \\ \hline
1271 
1272 Tree width of a graph (check a book on graph theory)  &  Let $T$ be
1273 a tree-decomposition of a graph $G$. The width of $T$ is
1274 \begin{equation} width(T) =\max_{X\in T}\{ \left\vert X\right\vert
1275 -1\} \end{equation} Let $D$ be the set of all tree-decompositions of
1276 $G$. Then the tree width of $G$ is $\min_{T\in D}width\left( T\right)
1277 $. CITATION(doorisboure03) \\ \hline
1278 
1279 Triangle inequality  &  In AI we say \[ h(n) \leq c(n, a, n') +
1280 h(n') \] (for every successor $n'$ of $n$ generated by an action $a$)
1281 is the triangle inequality for paths in the search tree. \\ \hline
1282 
1283 Truth table  &  A truth table specifies the truth value of a complex
1284 sentence for each possible assignment of truth values.
1285 CITATION(aitextbook) (p 206) \\ \hline
1286 
1287 \end{tabular}
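
\medskip
\noindent As a small worked instance of the TSP decision problem defined
above (the cities, distances, and bound are invented for the example),
let $C=\{c_1,c_2,c_3\}$ with $d(c_1,c_2)=2$, $d(c_2,c_3)=3$,
$d(c_1,c_3)=4$, and bound $B=9$. The ordering $<c_1,c_2,c_3>$ gives
total tour length
\begin{equation}
d(c_1,c_2)+d(c_2,c_3)+d(c_3,c_1) = 2+3+4 = 9 \leq B,
\end{equation}
so the answer to the question for this instance is yes.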
1288 
1289 
1290 \noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
1291 
1292 Turing machine  &  A Turing machine is a 7-tuple, $\left(  Q,\Sigma
1293 ,\Gamma,\delta,q_{0},q_{accept},q_{reject}\right)  $, where $Q$ is
1294 the finite set of states, $\Sigma$ is the input alphabet not
1295 containing the blank symbol, $\Gamma$ is the tape alphabet, where
1296 $\_\in\Gamma$ and $\Sigma\subseteq\Gamma $, $\delta :
1297 Q\times\Gamma\rightarrow Q\times\Gamma\times\left\{ L,R\right\}  $
1298 is the transition function, $q_{0}\in Q$ is the start state,
1299 $q_{accept}\in Q$ is the accept state, and $q_{reject}\in Q$ is the
1300 reject state, where $q_{accept}\neq q_{reject}$.
1301 CITATION(IntroToTheTheoryOfComputation2) \\ \hline
1302 
1303 Turing test  &  A human judge engages in a natural language
1304 conversation with two other parties, one a human and the other a
1305 machine; if the judge cannot reliably tell which is which, then the
1306 machine is said to pass the test. CITATION(Wikipedia:TuringTest) \\
1307 \hline
1308 
1309 Unary constraint  &  The simplest kind of constraint which restricts
1310 a single variable. CITATION(aitextbook) (p 140) \\ \hline
1311 
1312 Uniform-cost search  &  This strategy expands the node $n$ with the
1313 lowest path cost. Thus it uses a min-priority queue to
1314 determine which node to explore and expand next. If all step costs
1315 are equal this is identical to BFS. Let $C^*$ be the cost of the
1316 optimal solution, and assume that every action costs at least
1317 $\epsilon$. Then the algorithm's worst-case time and space
1318 complexity is $O(b^{1+\lfloor C^*/\epsilon \rfloor})$, which can be
1319 much greater than $b^d$. A worked evaluation of this bound follows this table. \\ \hline
1320 
1321 Uninformed search  &  These types of searches have no information
1322 beyond what is given in the problem definition (this is also called
1323 blind search). All they can do is generate successors and
1324 distinguish a goal state from a non-goal state. There are six
1325 given in Section 3.4: BFS, Uniform-Cost Search, DFS, Depth-limited
1326 Search, Iterative deepening depth-first search, and bidirectional
1327 search. \\ \hline
1328 
1329 Unit clause  &  A unit clause is a clause with just one literal.
1330 CITATION(aitextbook) (p 221) \\ \hline
1331 
1332 Universal constraint  &  A statement that two variables are not
1333 involved in a constraint together. CITATION(ainotes) \\ \hline
1334 
1335 Utility Function  &  A utility function maps a state (or sequence of
1336 states) onto a real number, which describes the associated degree of
1337 happiness. This allows rational decisions to be made when goals are
1338 inadequate: (1) when goals conflict, and (2) when uncertainty is
1339 attached to several goals, the utility function gives a way to
1340 determine which goal is most likely achievable. \\ \hline
1341 
1342 Validity  &  A sentence is valid if it is true in all models.
1343 CITATION(aitextbook) (p 210) \\ \hline
1344 
1345 Validity (of a logical sentence)  &  A sentence is valid if it is
1346 true in all models. CITATION(ainotes) \\ \hline
1347 
1348 Value ordering heuristic  &  Determines the order in which you
1349 choose values to assign to a variable from the domain.
1350 CITATION(aitextbook) (p 143) \\ \hline
1351 
1352 Variable (of a CSP)  &  Variables are part of a CSP that can take
1353 values defined by their domains. CITATION(aitextbook) (p 137) \\
1354 \hline
1355 
1356 Variable ordering heuristic  &  Determines the order in which you
1357 assign variables. CITATION(aitextbook) (p 143) \\ \hline
1358 
1359 Zero-sum games  &  Environments in which the utility values at
1360 the end of the game are always equal and opposite, so they sum to zero.
1361 CITATION(aitextbook) (p 161) \\ \hline
1362 
1363 \end{tabular}
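
\medskip
\noindent To illustrate the uniform-cost search bound quoted above (the
numbers are invented for the example), take branching factor $b=2$,
optimal solution cost $C^*=10$, and minimum step cost $\epsilon=1$. The
worst-case bound then evaluates to
\begin{equation}
O\left(b^{1+\lfloor C^*/\epsilon \rfloor}\right)
= O\left(2^{1+\lfloor 10/1 \rfloor}\right) = O\left(2^{11}\right),
\end{equation}
which can be much greater than $b^d$ when the cheapest solution is
shallow but the tree contains many low-cost steps.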
1364 
1365 \bigskip
1366 All definitions without a citation come from CITATION(aitextbook)
1367 
1368 \end{document}
