

Once a system "proves its mettle by attaining human-level intelligence", funding for hardware could multiply. I agree that funding for AI could multiply manyfold due to a sudden change in popular attention or political dynamics. But I'm thinking of something like a factor of 10 or maybe 50 in an all-out Cold War-style arms race.

A factor-of-10 or factor-of-50 boost in hardware isn't obviously that important. If before there was one human-level AI, there would now be 10 or 50 of them. In any case, I expect the "Sputnik moments" for AI to happen well before it achieves a human level of ability. Companies and militaries are not so short-sighted that they would fail to invest massively in an AI with almost-human intelligence. Once the human level of intelligence is reached, "Researchers may work harder, [and] more researchers may be recruited".

As with hardware above, I would expect these "shit hits the fan" moments to happen before fully human-level AI. At some point, the AI's self-improvements would dominate those of human engineers, leading to exponential growth. I discussed this in the "Intelligence explosion?" section above. A main point is that we see many other systems, such as the world economy or Moore's law, that also exhibit positive feedback and hence exponential growth, yet these aren't "fooming" at an astounding rate.
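
To make the feedback-loop point concrete, here is a minimal numerical sketch (my own illustration with arbitrary constants, not something from the essay or its sources): when improvement is merely proportional to current capability, you get ordinary exponential growth like the economy or Moore's law; a genuine "foom" needs faster-than-proportional feedback, which diverges in finite time.

```python
# Minimal sketch: positive feedback alone gives exponential growth; a "foom"
# corresponds to growth faster than proportional to current capability, which
# blows up in finite time. Constants (k, dt, cap) are arbitrary illustrations.
def grow(exponent, k=0.05, i0=1.0, dt=0.01, t_max=100.0, cap=1e12):
    """Euler-integrate dI/dt = k * I**exponent until time t_max or capability cap."""
    i, t = i0, 0.0
    while t < t_max and i < cap:
        i += k * i**exponent * dt
        t += dt
    return t, i

for exponent in (1.0, 1.5):  # 1.0 ~ ordinary compounding; >1 ~ superexponential
    t, i = grow(exponent)
    print(f"dI/dt = k*I^{exponent}: stopped at t = {t:.1f} with capability I = {i:.3g}")
```

With exponent 1.0 the capability is still modest at t = 100, whereas with exponent 1.5 it blows past the cap around t = 40; the debate is over which regime AI self-improvement resembles.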

It's not clear why an AI's self-improvement -- which resembles economic growth and other complex phenomena -- should suddenly explode faster in subjective time than humanity's existing recursive self-improvement of its intelligence via digital computation. On the other hand, maybe the difference between subjective and objective time is important. If a human-level AI could think, say, 10,000 times faster than a human, then assuming linear scaling, it would be worth 10,000 engineers. By the time of human-level AI, I expect there would be far more than 10,000 AI developers on Earth, but given enough hardware, the AI could copy itself manyfold until its collective subjective time far exceeded that of human experts.
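
The underlying arithmetic is trivial; the sketch below just multiplies the assumed numbers together (the speedup and copy counts are illustrative assumptions, not estimates from the essay):

```python
# Subjective researcher-years delivered per calendar year, under assumed numbers.
def subjective_researcher_years(copies, speedup, wall_clock_years=1):
    return copies * speedup * wall_clock_years

# One AI thinking 10,000x faster than a human, running for one calendar year:
print(subjective_researcher_years(copies=1, speedup=10_000))        # 10,000
# The same AI copied 1,000 times, hardware permitting:
print(subjective_researcher_years(copies=1_000, speedup=10_000))    # 10,000,000
```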

The speed and copiability advantages of digital minds seem perhaps the strongest arguments for a takeoff that happens rapidly relative to human observers. That said, there should be plenty of slightly sub-human AIs by this time, and maybe they could fill some speed gaps on behalf of biological humans. In general, it's a mistake to imagine human-level AI against a backdrop of our current world. That's like imagining a Tyrannosaurus rex in a human city. Rather, the world will look very different by the time human-level AI arrives.

Before AI can exceed human performance in all domains, it will exceed human performance in many narrow domains gradually, and these narrow-domain AIs will help humans respond quickly. For example, a narrow AI that's an expert at military planning based on war games can help humans with possible military responses to rogue AIs. Many of the intermediate steps on the path to general AI will be commercially useful and thus should diffuse widely in the meanwhile.

As user "HungryHobo" noted: For instance, Bostrom mentions how in the flash crash Box 2, p. This is already an example where problems happening faster than humans could comprehend them were averted due to solutions happening faster than humans could comprehend them. See also the discussion of "tripwires" in Superintelligence p. Conversely, many globally disruptive events may happen well before fully human AI arrives, since even sub-human AI may be prodigiously powerful. Hence, the project might take off and leave the world behind. What one makes of this argument depends on how many people are needed to engineer how much progress.

The Watson system that played on Jeopardy! took a team of researchers several years to build, yet Watson was a much smaller leap forward than what would be needed to give a general intelligence a take-over-the-world advantage. How many more people would be required to achieve such a radical leap in intelligence? This seems to be a main point of contention in the debate between believers in soft vs. hard takeoff. Can we get insight into how hard general intelligence is based on neuroscience? Is the human brain fundamentally simple or complex? Jeff Hawkins, Andrew Ng, and others speculate that the brain may have one fundamental algorithm for intelligence -- deep learning in the cortical column.

This idea gains plausibility from the brain's plasticity. For instance, blind people can appropriate the visual cortex for auditory processing. Artificial neural networks can be used to classify any kind of input -- not just visual and auditory data but even highly abstract inputs, like features describing credit-card fraud or stock prices.
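
As a small illustration of that input-agnosticism, the sketch below trains the same tiny network on two unrelated, randomly generated feature sets, standing in for (say) credit-card-fraud features and audio features. The data and labels are synthetic and scikit-learn is assumed to be available; the point is only that nothing in the architecture cares what the features mean.

```python
# One generic classifier, two unrelated "domains" (both synthetic stand-ins).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def train_generic_classifier(n_samples, n_features):
    X = rng.normal(size=(n_samples, n_features))      # meaningless synthetic features
    y = (X.sum(axis=1) > 0).astype(int)               # arbitrary labeling rule
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    return clf.score(X, y)

print("'fraud-like' features (30 dims):", train_generic_classifier(500, 30))
print("'audio-like' features (128 dims):", train_generic_classifier(500, 128))
```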

Maybe there's one fundamental algorithm for input classification, but this doesn't imply one algorithm for all that the brain does. Beyond the cortical column, the brain has many specialized structures that seem to perform very specialized functions, such as reward learning in the basal ganglia, fear processing in the amygdala, etc. Of course, it's not clear how essential all of these parts are or how easy it would be to replace them with artificial components performing the same basic functions.

One argument for faster AGI takeoffs is that humans have been able to learn many sophisticated things (e.g., calculus and general relativity) without requiring changes to our genetic hardware. And what we now know doesn't seem to represent any kind of limit to what we could know with more learning. The human collection of cognitive algorithms is very flexible, which seems to belie claims that all intelligence requires specialized designs.

On the other hand, even if human genes haven't changed much in the last 10,000 years, human culture has evolved substantially, and culture undergoes slow trial-and-error evolution in similar ways as genes do. So one could argue that human intellectual achievements are not fully general but rely on a vast amount of specialized, evolved content. Just as a single random human isolated from society probably couldn't develop general relativity on his own in a lifetime, so a single random human-level AGI probably couldn't either.

Culture is the new genome, and it progresses slowly. Moreover, some scholars believe that certain human abilities, such as language, are essentially based on genetic hard-wiring: The approach taken by Chomsky and Marr toward understanding how our minds achieve what they do is as different as can be from behaviorism.

The emphasis here is on the internal structure of the system that enables it to perform a task, rather than on external association between past behavior of the system and the environment. The goal is to dig into the "black box" that drives the system and describe its inner workings, much like how a computer scientist would explain how a cleverly designed piece of software works and how it can be executed on a desktop computer.

There's a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel and King, arguing -- in my view, plausibly -- that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology. So it's not implausible that a lot of the brain's basic architecture could be similarly hard-coded. Typically AGI researchers express scorn for manually tuned software algorithms that don't rely on fully general learning.

But Chomsky's stance challenges that sentiment. If Chomsky is right, then a good portion of human "general intelligence" is finely tuned, hard-coded software of the sort that we see in non-AI branches of software engineering. And this view would suggest a slower AGI takeoff because time and experimentation are required to tune all the detailed, specific algorithms of intelligence. A full-fledged superintelligence probably requires very complex design, but it may be possible to build a "seed AI" that would recursively self-improve toward superintelligence.

Alan Turing proposed this in his "Computing Machinery and Intelligence": Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer's. Rather little mechanism, and lots of blank sheets.

Mechanism and writing are from our point of view almost synonymous. Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed. Animal development appears to be at least somewhat robust based on the fact that the growing organisms are often functional despite a few genetic mutations and variations in prenatal and postnatal environments.

Such variations may indeed make an impact in some cases. On the other hand, an argument against the simplicity of development is the immense complexity of our DNA. It accumulated over billions of years through vast numbers of evolutionary "experiments". It's not clear that human engineers could perform enough measurements to tune ontogenetic parameters of a seed AI in a short period of time. And even if the parameter settings worked for early development, they would probably fail for later development. Rather than a seed AI developing into an "adult" all at once, designers would develop the AI in small steps, since each next stage of development would require significant tuning to get right.

Think about how much effort is required for human engineers to build even relatively simple systems. For example, I think the number of developers who work on Microsoft Office is in the thousands. Microsoft Office is complex but is still far simpler than a mammalian brain. Brains have lots of little parts that have been fine-tuned. That kind of complexity requires immense work by software developers to create. The main counterargument is that there may be a simple meta-algorithm that would allow an AI to bootstrap to the point where it could fine-tune all the details on its own, without requiring human inputs.

This might be the case, but my guess is that any elegant solution would be hugely expensive computationally. For instance, biological evolution was able to fine-tune the human brain, but it did so with immense amounts of computing power over millions of years. A common analogy for the gulf between superintelligence and human intelligence is the gulf between human and chimpanzee intelligence. In Consciousness Explained, Daniel Dennett mentions that the human brain is roughly four times the size of a chimpanzee's. This might incline one to imagine that brain size alone could yield superintelligence. Maybe we'd just need to quadruple human brains once again to produce superintelligent humans?

If so, wouldn't this imply a hard takeoff, since quadrupling hardware is relatively easy? But in fact, as Dennett explains, the quadrupling of brain size from chimps to pre-humans completed before the advent of language, cooking, agriculture, etc. In other words, the main "foom" of humans came from culture rather than brain size per se -- from software in addition to hardware. Yudkowsky seems to agree. But cultural changes (software) arguably progress a lot more slowly than hardware improvements.

The intelligence of human society has grown exponentially, but it's a slow exponential, and rarely have there been innovations that allowed one group to quickly overpower everyone else within the same region of the world. Between isolated regions of the world the situation was sometimes different -- e.g., Europeans colonizing the Americas. Some, including Owen Cotton-Barratt and Toby Ord, have argued that even if we think soft takeoffs are more likely, there may be higher value in focusing on hard-takeoff scenarios, because these are the cases in which society would have the least forewarning and the fewest people working on AI altruism issues.

This is a reasonable point, but I would add some caveats. In any case, the hard-soft distinction is not binary, and maybe the best place to focus is on scenarios where human-level AI takes over on a time scale of a few years. Timescales of months, days, or hours strike me as pretty improbable, unless, say, Skynet gets control of nuclear weapons. In Superintelligence, Nick Bostrom suggests that early work might not matter as much in a slow takeoff. Ord contrasts this with the benefits of starting early, including course-setting. I think Ord's counterpoints argue against the contention that early work wouldn't matter that much in a slow takeoff.

Some of how society responds to AI surpassing human intelligence might depend on early frameworks and memes. For instance, consider the lingering impact of Terminator imagery on almost any present-day popular-media discussion of AI risk. Some fundamental work would probably not be overthrown by later discoveries; for instance, algorithmic-complexity bounds of key algorithms were discovered decades ago but will remain relevant until intelligence dies out, possibly billions of years from now. Some non-technical policy and philosophy work would be less obsoleted by changing developments.

And some AI preparation would be relevant both in the short term and the long term. A slow AI takeoff toward the human level is already happening, and more minds should be exploring these questions well in advance. Bostrom makes a related though slightly different point in Superintelligence.

Even if one does wish to bet on low-probability, high-impact scenarios of fast takeoff and governmental neglect, this doesn't speak to whether or how we should push on takeoff speed and governmental attention themselves. Following are a few considerations. One of the strongest arguments for hard takeoff is Yudkowsky's point that the gap between a village idiot and Einstein, which looms large to us, is tiny on the scale of minds in general.

Or as Scott Alexander put it: "It took evolution twenty million years to go from cows with sharp horns to hominids with sharp spears; it took only a few tens of thousands of years to go from hominids with sharp spears to moderns with nuclear weapons." I think we shouldn't take relative evolutionary timelines at face value, because most of the previous 20 million years of mammalian evolution weren't focused on improving human intelligence; most of the evolutionary selection pressure was directed toward optimizing other traits.

In contrast, cultural evolution places greater emphasis on intelligence because that trait is more important in human society than it is in most animal fitness landscapes. Still, the overall point is important: The tweaks to a brain needed to produce human-level intelligence may not be huge compared with the designs needed to produce chimp intelligence, but the differences in the behaviors of the two systems, when placed in a sufficiently information-rich environment, are huge.

Nonetheless, I incline toward thinking that the transition from human-level AI to an AI significantly smarter than all of humanity combined would be somewhat gradual (requiring years if not decades), because the absolute scale of improvements needed would still be immense and would be limited by hardware capacity. But if hardware becomes many orders of magnitude more efficient than it is today, then things could indeed move more rapidly.

Another important criticism of the "village idiot" point is that it lacks context. While a village idiot in isolation will not produce rapid progress toward superintelligence, one Einstein plus a million village idiots working for him can produce AI progress much faster than one Einstein alone. The narrow-intelligence software tools that we build are dumber than village idiots in isolation, but collectively, when deployed in thoughtful ways by smart humans, they allow humans to achieve much more than Einstein by himself with only pencil and paper.

This observation weakens the idea of a phase transition when human-level AI is developed, because village-idiot-level AIs in the hands of humans will already be achieving "superhuman" levels of performance.


If we think of human intelligence as the number 1 and human-level AI that can build smarter AI as the number 2, then rather than imagining a transition from 1 to 2 at one crucial point, we should think of our "dumb" software tools as taking us to 1.1, then 1.2, then 1.3, and so on, gradually closing the gap. My thinking on this point was inspired by Ramez Naam. Some people infer from recent game-playing accomplishments, such as AlphaGo's victories and DeepMind's Atari results, that AGI may not be far off. I think performance in these simple games doesn't give much evidence that a world-conquering AGI could arise within a decade or two.

A main reason is that most of the games at which AI has excelled have had simple rules and a limited set of possible actions at each turn. As Russell and Norvig note, the state of a game is easy to represent, and agents are usually restricted to a small number of actions whose outcomes are defined by precise rules. For example, AlphaGo's "policy networks" gave "a probability value for each possible legal move".

Likewise, DeepMind's deep Q-network for playing Atari games had "a single output for each valid action" (Mnih et al.). In contrast, the state space of the world is enormous, heterogeneous, not easily measured, and not easily represented in a simple two-dimensional grid. Plus, the number of possible actions that one can take at any given moment is almost unlimited; for instance, even just considering actions of the form "print to the screen a string of uppercase or lowercase alphabetical characters fewer than 50 characters long", the number of possibilities for what text to print out is larger than the number of atoms in the observable universe.
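
To make the contrast vivid, here is a rough sketch (my own illustration with made-up dimensions, not the actual AlphaGo or DQN architectures): a game policy head emits one probability per action from a small fixed menu, while even one narrow family of real-world actions is combinatorially explosive.

```python
# Fixed, enumerable game actions vs. one tiny slice of real-world actions.
import numpy as np

def policy_head(state_features, weights):
    """Map a state vector to one probability per action via a softmax layer."""
    logits = state_features @ weights            # shape: (n_actions,)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

n_features, n_actions = 64, 361                  # e.g. one output per Go board point
probs = policy_head(np.random.rand(n_features), np.random.rand(n_features, n_actions))
print(len(probs), "action probabilities, summing to", round(probs.sum(), 6))

# Versus just the actions "print an alphabetic string shorter than 50 characters":
n_strings = sum(52**k for k in range(1, 50))     # upper- and lowercase letters
print(f"{n_strings:.3e} possible strings vs ~1e80 atoms in the observable universe")
```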

Some people may be impressed that AlphaGo uses "intuition", i.e., pattern recognition learned by its neural networks. But the idea that computers can have "intuition" is nothing new, since that's what most machine-learning classifiers are about. Machine learning, especially supervised machine learning, is very popular these days compared with other aspects of AI.

Perhaps this is because, unlike most other parts of AI, machine learning can easily be commercialized? But even if visual, auditory, and other sensory recognition can be replicated by machine learning, this doesn't get us to AGI. In my opinion, the hard part of AGI (or at least, the part we haven't made as much progress on) is how to hook together various narrow-AI modules and abilities into a more generally intelligent agent that can figure out what abilities to deploy in various contexts in pursuit of higher-level goals.
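
To see why this integration step is the hard part, consider a toy controller that dispatches goals to narrow modules. Everything below is a hypothetical stand-in of my own, not a real system; the brittle keyword dispatch is precisely the part that would need to become genuinely intelligent.

```python
# Toy "hook narrow modules together" controller; the dispatch logic is the hard part.
from typing import Callable, Dict

narrow_modules: Dict[str, Callable[[str], str]] = {
    "vision":   lambda task: f"[vision module] labeled objects for: {task}",
    "planning": lambda task: f"[planner] produced a step sequence for: {task}",
    "language": lambda task: f"[language module] drafted text for: {task}",
}

def naive_controller(goal: str) -> str:
    """Crude keyword dispatch; a general agent would need real goal decomposition,
    context, and common sense rather than hand-written rules like these."""
    if "describe" in goal or "see" in goal:
        return narrow_modules["vision"](goal)
    if "write" in goal or "explain" in goal:
        return narrow_modules["language"](goal)
    return narrow_modules["planning"](goal)

print(naive_controller("describe the objects on the table"))
print(naive_controller("plan a route to the charging station"))
```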

Hierarchical planning in complex worlds, rich semantic networks, and general "common sense" in various flavors still seem largely absent from many state-of-the-art AI systems as far as I can tell. I don't think these are problems that you can just bypass by scaling up deep reinforcement learning or something. Kaufman (a), describing a conversation with professor Bryce Wiedenbeck, says: If something like today's deep learning is still a part of what we eventually end up with, it's more likely to be something that solves specific problems than a critical component.

Two lines of evidence for this view are that (1) supervised machine learning has been a cornerstone of AI for decades and (2) animal brains, including the human cortex, seem to rely crucially on something like deep learning for sensory processing. However, I agree with Bryce that there remain big parts of human intelligence that aren't captured by even a scaled-up version of deep learning.

I also largely agree with Michael Littman's expectations as described by Kaufman (b): he didn't think this was possible, and he believes there are deep conceptual issues we still need to get a handle on. Merritt quotes Stuart Russell as saying that modern neural nets "lack the expressive power of programming languages and declarative semantics that make database systems, logic programming, and knowledge systems useful."

Yudkowsky (a) discusses some interesting insights from AlphaGo's matches against Lee Sedol and from DeepMind's work more generally. I agree with Yudkowsky that there are domains where a new general tool renders previous specialized tools obsolete all at once. The October architecture was simple and, so far as I know, incorporated very little in the way of all the particular tweaks that had built up the power of the best open-source Go programs of the time. Judging by the October architecture, after their big architectural insight, DeepMind mostly started over in the details (though they did reuse the widely known core insight of Monte Carlo Tree Search).

This is a good point, but I think it's mainly a function of the limited complexity of the Go problem. With the exception of learning from human play, AlphaGo didn't require massive inputs of messy, real-world data to succeed, because its world was so simple. Go is the kind of problem where we would expect a single system to be able to perform well without trading for cognitive assistance. Real-world problems are more likely to depend upon external AI systems -- e.g., calling out to a web-search service for up-to-date information. No simple AI system that runs on just a few machines will reproduce the massive data or extensively fine-tuned algorithms of Google search.

For the foreseeable future, Google search will always be an external "polished cognitive module" that needs to be "traded for" although Google search is free for limited numbers of queries. The same is true for many other cloud services, especially those reliant upon huge amounts of data or specialized domain knowledge.

We see lots of specialization and trading of non-AI cognitive modules, such as hardware components, software applications, Amazon Web Services, etc.


And of course, simple AIs will for a long time depend upon the human economy to provide material goods and services, including electricity, cooling, buildings, security guards, national defense, etc.

Estimating how long a software project will take to complete is notoriously difficult. Even if I've completed many similar coding tasks before, when I'm asked to estimate the time to complete a new coding project, my estimate is often wrong by a factor of 2 and sometimes wrong by a factor of 4 or more. Insofar as the development of AGI (or other big technologies, like nuclear fusion) is a big software project, or more generally an engineering project, it's unsurprising that we'd see similarly dramatic failures of estimation on timelines for these bigger-scale achievements.

A corollary is that we should maintain some modesty about AGI timelines and takeoff speeds. If, say, 100 years is your median estimate for the time until some agreed-upon form of AGI, then there's a reasonable chance you'll be off by a factor of 2 (suggesting AGI within 50 to 200 years), and you might even be off by a factor of 4 (suggesting AGI within 25 to 400 years).
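
The sketch below just spells out that multiplicative-error arithmetic for the 100-year example:

```python
# Multiplicative error bounds around a median timeline estimate.
def timeline_range(median_years, error_factor):
    return median_years / error_factor, median_years * error_factor

for factor in (2, 4):
    low, high = timeline_range(100, factor)
    print(f"off by a factor of {factor}: AGI within {low:.0f} to {high:.0f} years")
```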

Similar modesty applies for estimates of takeoff speed from human-level AGI to super-human AGI, although I think we can largely rule out extreme takeoff speeds (like achieving performance far beyond human abilities within hours or days) based on fundamental reasoning about the computational complexity of what's required to achieve superintelligence.



My bias is generally to assume that a given technology will take longer to develop than what you hear about in the media, (a) because of the planning fallacy and (b) because those who make more audacious claims are more interesting to report about. Believers in "the singularity" are not necessarily wrong about what's technically possible in the long term (though sometimes they are), but the reason enthusiastic singularitarians are considered "crazy" by more mainstream observers is that singularitarians expect change much faster than is realistic.

AI turned out to be much harder than the Dartmouth Conference participants expected. Likewise, nanotech is progressing slower and more incrementally than the starry-eyed proponents predicted. Many nature-lovers are charmed by the behavior of animals but find computers and robots to be cold and mechanical.

Conversely, some computer enthusiasts may find biology to be soft and boring compared with digital creations.


However, the two domains share a surprising amount of overlap. Ideas of optimal control, locomotion kinematics, visual processing, system regulation, foraging behavior, planning, reinforcement learning, etc. apply to animals and robots alike. Neuroscientists sometimes look to the latest developments in AI to guide their theoretical models, and AI researchers are often inspired by neuroscience, such as with neural networks and in deciding what cognitive functionality to implement.

I think it's helpful to see animals as being intelligent robots. Organic life has a wide diversity, from unicellular organisms through humans and potentially beyond, and so too can robotic life. The rigid conceptual boundary that many people maintain between "life" and "machines" is not warranted by the underlying science of how the two types of systems work. Different types of intelligence may sometimes converge on the same basic kinds of cognitive operations, and especially from a functional perspective -- when we look at what the systems can do rather than how they do it -- it seems to me intuitive that human-level robots would deserve human-level treatment, even if their underlying algorithms were quite dissimilar.

Whether robot algorithms will in fact be dissimilar from those in human brains depends on how much biological inspiration the designers employ and how convergent human-type mind design is for being able to perform robotic tasks in a computationally efficient manner. In one YouTube video about robotics, I saw that someone had written a comment to the effect that "This shows that life needs an intelligent designer to be created." The irony is that robots don't strictly need intelligent designers either: evolutionary robotics produces robot controllers, and even body plans, largely through blind variation and selection rather than detailed manual design.

Of course, there are theists who say God used evolution but intervened at a few points, and that would be an apt description of evolutionary robotics. The distinction between AI and AGI is somewhat misleading, because it may incline one to believe that general intelligence is somehow qualitatively different from simpler AI.

In fact, there's no sharp distinction; there are just different machines whose abilities have different degrees of generality. A critic of this claim might reply that bacteria would never have invented calculus. My response is as follows. Most people couldn't have invented calculus from scratch either, but over a long enough period of time, eventually the collection of humans produced enough cultural knowledge to make the development possible. Likewise, if you put bacteria on a planet long enough, they too may develop calculus, by first evolving into more intelligent animals who can then go on to do mathematics.

The difference here is a matter of degree: bacteria, being much simpler machines, take vastly longer to accomplish a given complex task. Just as Earth's history saw a plethora of animal designs before the advent of humans, so I expect a wide assortment of animal-like and plant-like robots to emerge in the coming decades, well before human-level AI. Indeed, we've already had basic robots for many decades (or arguably even millennia). These will grow gradually more sophisticated, and as we converge on robots with the intelligence of birds and mammals, AI and robotics will become dinner-table conversation topics.

Of course, I don't expect the robots to have the same sets of skills as existing animals. Deep Blue had chess-playing abilities beyond any animal, while in other domains it was less efficacious than a blade of grass. Robots can mix and match cognitive and motor abilities without strict regard for the order in which evolution created them.

And of course, humans are robots too. When I finally understood this, it was one of the biggest paradigm shifts of my life. If I picture myself as a robot operating on an environment, the world makes a lot more sense. I also find this perspective can be therapeutic to some extent. If I experience an unpleasant emotion, I think about myself as a robot whose cognition has been temporarily afflicted by a negative stimulus and reinforcement process. I then think how the robot has other cognitive processes that can counteract the suffering computations and prevent them from amplifying.

The ability to see myself "from the outside" as a third-person series of algorithms helps deflate the impact of unpleasant experiences, because it's easier to "observe, not judge" when viewing a system in mechanistic terms. Compare with dialectical behavior therapy and mindfulness.

When we use machines to automate a repetitive manual task formerly done by humans, we talk about getting the task done "automatically" and "for free," because we say that no one has to do the work anymore. Of course, this isn't strictly true: the machine still has to do the work. Maybe what we actually mean is that no one is going to get bored doing the work, and we don't have to pay that worker high wages.

When intelligent humans do boring tasks, it's a waste of their spare CPU cycles. Sometimes we adopt a similar mindset about automation toward superintelligent machines. As I. J. Good famously put it: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines [...]. Thus the first ultraintelligent machine is the last invention that man need ever make [...]." Ignoring the question of whether these future innovations are desirable, we can ask: Does all the AI design work that comes after humans really happen for free?

It comes for free in the sense that humans aren't doing it. But the AIs have to do it, and it takes a lot of mental work on their parts. Given that they're at least as intelligent as humans, I think it doesn't make sense to picture them as mindless automatons; rather, they would have rich inner lives, even if those inner lives have a very different nature than our own.

Maybe they wouldn't experience the same effortfulness that humans do when innovating, but even this isn't clear, because measuring your effort in order to avoid spending too many resources on a task without payoff may be a useful design feature of AI minds too. When we picture ourselves as robots along with our AI creations, we can see that we are just one point along a spectrum of the growth of intelligence. Unicellular organisms, when they evolved the first multi-cellular organism, could likewise have said, "That's the last innovation we need to make.

The rest comes for free."

Movies typically portray rebellious robots or AIs as the "bad guys" who need to be stopped by heroic humans. This dichotomy plays on our us-vs.-them instincts. We see similar dynamics at play to a lesser degree when people react negatively against "foreigners stealing our jobs" or "Asians who are outcompeting us." But when we think about the situation from the AI's perspective, we might feel differently.


Anthropomorphizing an AI's thoughts is a recipe for trouble, but regardless of the specific cognitive operations, we can see at a high level that the AI "feels" in at least a poetic sense that what it's trying to accomplish is the most important thing in the world, and it's trying to figure out how it can do that in the face of obstacles. Isn't this just what we do ourselves? This is one reason it helps to really internalize the fact that we are robots too.

We have a variety of reward signals that drive us in various directions, and we execute behavior aiming to increase those rewards. Many modern-day robots have much simpler reward structures and so may seem more dull and less important than humans, but it's not clear this will remain true forever, since navigating in a complex world probably requires a lot of special-case heuristics and intermediate rewards, at least until enough computing power becomes available for more systematic and thorough model-based planning and action selection.
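
As a toy illustration of that contrast (all names and numbers below are hypothetical), the sketch shows an agent choosing actions either by cheap, hand-coded intermediate rewards or by simulating each action with a world model and evaluating the result:

```python
# Special-case heuristic rewards vs. model-based lookahead, in miniature.
def heuristic_policy(state, actions, heuristic_reward):
    """Greedy choice on cheap, hand-coded intermediate rewards."""
    return max(actions, key=lambda a: heuristic_reward(state, a))

def model_based_policy(state, actions, model, value):
    """Costlier choice: simulate each action with a world model, then evaluate."""
    return max(actions, key=lambda a: value(model(state, a)))

# Hypothetical gridworld: state is (x, y), goal is (5, 5).
actions = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
model = lambda s, a: (s[0] + actions[a][0], s[1] + actions[a][1])
value = lambda s: -(abs(5 - s[0]) + abs(5 - s[1]))             # closer to goal is better
heuristic = lambda s, a: 1.0 if a in ("up", "right") else 0.0  # crude special-case rule

print(heuristic_policy((0, 0), list(actions), heuristic))       # picks "up" by rule of thumb
print(model_based_policy((0, 0), list(actions), model, value))  # picks "up" after simulating
```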

Suppose an AI hypothetically eliminated humans and took over the world. It would develop an array of robot assistants of various shapes and sizes to help it optimize the planet. These would perform simple and complex tasks, would interact with each other, and would share information with the central AI command.

From an abstract perspective, some of these dynamics might look like ecosystems in the present day, except that they would lack inter-organism competition. Other parts of the AI's infrastructure might look more industrial. Depending on the AI's goals, perhaps it would be more effective to employ nanotechnology and programmable matter rather than macro-scale robots.

The AI would develop virtual scientists to learn more about physics, chemistry, computer hardware, and so on. They would use experimental laboratory and measurement techniques but could also probe depths of structure that are only accessible via large-scale computation. Digital engineers would plan how to begin colonizing the solar system. They would develop designs for optimizing matter to create more computing power, and for ensuring that those helper computing systems remained under control. The AI would explore the depths of mathematics and AI theory, proving beautiful theorems that it would value highly, at least instrumentally.

The AI and its helpers would proceed to optimize the galaxy and beyond, fulfilling their grandest hopes and dreams. When phrased this way, we might think that a "rogue" AI would not be so bad.


Yes, it would kill humans, but compared with the AI's vast future intelligence, humans would be comparable to the ants in a field that get crushed when an art gallery is built on that land. Most people don't have qualms about killing a few ants to advance human goals. An analogy of this sort is discussed in Artificial Intelligence: A Modern Approach. Perhaps the AI analogy suggests a need to revise our ethical attitudes toward arthropods? That said, I happen to think that in this case, ants on the whole benefit from the art gallery's construction, because ant lives contain so much suffering.
