AI and Predicting Sales

Fellow appraiser and certified USPAP instructor Tim Luke sent me an interesting article from Artsy on how technology and artificial intelligence are changing the art market and, potentially, connoisseurship. Many art professionals regard these technologies begrudgingly, but if we don't embrace them, or at least understand and accept them, we will be cut out of the equation and left behind.

All appraisers should be aware of these potential changes and technological applications.

Artsy reports on the analysis of Rothko works and how upcoming sales will (or will not) confirm the model's predictions. The article is long, so I have posted only a portion of it; follow the source link below for the full article and context.

Artsy reports
Imagine standing in front of a Rothko and really looking at it. You will quickly notice the number of rectangles, their colors, and how they compare with the monochromatic field on which they are painted. These are easily measured aspects of the painting. But there are so many other qualities to a Rothko that are hard to verbalize, let alone quantify, that contribute to its emotive beauty, such as how the edges of the rectangles dissolve into the base color, the luminosity of the paint, and the unusual spatial relationships created by the juxtapositions of color and form. Our eyes see this information and transmit it to our brains, provoking an emotional response.

Artificial intelligence now enables machines to view the world, in some respects, as humans do and allows them to use that knowledge for a variety of tasks, including driving autonomous cars and monitoring people in Times Square using video surveillance systems. The revolution in computer vision is due in large part to a pattern-recognition algorithm called a Convolutional Neural Network (CNN). A CNN looks at pixels in digital images and finds patterns in them, without the machine first being told what to look for. Put another way, this technique involves the machine extracting underlying characteristics of an image on its own, including characteristics that are difficult to pre-specify. We used this method to analyze digital images of Rothko paintings, generating information that could be used to predict sale prices.
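The core idea behind a CNN is the convolution: a small kernel slides over the image and responds wherever it finds a matching pattern, such as an edge where one color block dissolves into another. A minimal sketch of that operation, using a toy image rather than any real painting or the authors' actual network:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over a grayscale image, producing a feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy "Rothko": one bright block on a dark field.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# A vertical-edge kernel: responds where brightness changes left to right.
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])

feature_map = conv2d(image, edge_kernel)
# Strong responses appear only at the left and right edges of the block --
# the kind of structure a trained CNN learns to detect on its own, at many
# scales and orientations, without being told what to look for.
print(feature_map[3])
```

In a real CNN, many such kernels are learned from the data rather than hand-written, and their stacked responses become the image features fed to the price model.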

To build our model, we created a database of all Classic Years and late Years of Transition works by the artist that have sold at auction since 2000, a total of 118 objects. The database includes not only all-in sale prices (hammer price plus buyer’s premium) and object descriptors (size, date painting was made, date it was sold, painting on canvas or paper, etc.) gathered from the artnet price database, but also digital images of each work that we pulled from the web. As a potential replacement for the digital images, we also hand scored the formal properties of each painting: the number of color blocks, number of horizontal stripes the artist may have used to separate these color blocks, the dominant color in the painting, and the background color. These four variables would typically be assembled by an appraiser to compare such paintings. In addition to these “supply” variables, we also gathered various measures of the “demand” for art, such as growth in worldwide wealth, growth in various stock indexes, and the aggregate wealth and total count of billionaires in the Forbes Billionaires Index. We then used these data to develop a model that would predict auction sale prices.
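In spirit, the price model described above is a hedonic regression: object descriptors and demand measures on one side, sale price on the other. A minimal sketch of that idea using ordinary least squares on entirely synthetic data (every variable name, coefficient, and number below is invented for illustration; this is not the authors' actual model or data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 118  # same count as the article's database, but the data here is synthetic

# "Supply" variables: size and support. "Demand" variable: billionaire count.
height = rng.uniform(50, 300, n)           # cm
width = rng.uniform(50, 300, n)            # cm
on_paper = rng.integers(0, 2, n)           # 1 = work on paper, 0 = canvas
billionaires = rng.uniform(1000, 2700, n)  # worldwide count at sale date

# Synthetic log-prices: bigger is dearer, paper trades at a discount,
# more billionaires lift demand. Noise stands in for everything else.
log_price = (0.004 * height + 0.004 * width
             - 0.6 * on_paper + 0.0005 * billionaires
             + 14.0 + rng.normal(0, 0.1, n))

# Fit by ordinary least squares (intercept via a column of ones).
X = np.column_stack([height, width, on_paper, billionaires, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# In-sample mean absolute percentage error on the price scale.
actual = np.exp(log_price)
forecast = np.exp(X @ coef)
mape = np.mean(np.abs(actual - forecast) / actual)
print(f"paper discount coef: {coef[2]:.3f}, MAPE: {mape:.1%}")
```

The fitted coefficients recover the planted structure: positive weights on size and billionaire count, a negative weight on works on paper, echoing the article's findings.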

The model we created is surprisingly accurate, with just a 5.5% margin of error for past sales. This means that the difference between the actual sale price and the forecasted sale price was on average just 5.5% across all the paintings in our database. What makes this best-performing model so interesting is that its predictions are based simply on the digital image plus five variables: painting height and width, whether it’s a work on paper or canvas, the number of billionaires in the world, and the wealth they control. None of the other variables in our database appeared relevant to the price, including the date the painting was made. We also compared our computer-generated model with our own assessments, replacing the digital image with the four hand-scored variables mentioned above. With those hand-scored variables, the prediction error soared to 20%, making the model woefully inadequate. That difference is a vivid reminder that a machine can often “look” at a painting more incisively than the human eye.
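The 5.5% figure described above is an average of per-painting percentage errors, a mean absolute percentage error. A quick illustration of how such a figure is computed from actual and forecast prices (the four prices below are made up, not sale results from the article's database):

```python
# Hypothetical actual vs. forecast all-in prices, in USD millions.
actual   = [46.5, 28.2, 17.6, 35.0]
forecast = [44.0, 29.5, 18.5, 33.0]

# Per-painting absolute error as a fraction of the actual price.
errors = [abs(a - f) / a for a, f in zip(actual, forecast)]

# Average across all paintings: the headline "margin of error".
mape = sum(errors) / len(errors)
print(f"{mape:.1%}")
```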

A few general observations about the model before putting it to work. First, the number of billionaires and the wealth they control were by far the most important demand-side variables in explaining sale prices. Second, size matters: The larger the work, the more valuable it is, all else being held constant. Third, paintings on paper trade at a significant discount compared to paintings on canvas, all else held constant. Lastly, brightly colored orange and purple paintings that pop off the wall tend to open buyers’ wallets more than darkly colored browns and grays.

Source: Artsy
