[Image: what-the-plot-WND-31Z-eleanor-edwards.jpg]

Background


Throughout this project I have been investigating my perception of the imbalance between the art object and the audience that the environment of an art gallery can provoke in visitors: by placing physical and mental barriers between them and the artwork, a space that should be inviting and embracing instead positions visitors as less important than the art. My thoughts turned to the perception of perfection and higher status held by audience members, in particular those who would consider themselves ‘outside’ the art world, i.e. those who are not supposed to ‘get it’, as well as the constant striving for perfection that artists themselves undertake.


This led me to a different approach from previous projects, where I have been investigating ways to break down the barriers built by galleries through active interaction and connected outputs. This time, instead of acting at the end point of the process, where the art is, so to speak, complete and the audience interacts with it, I decided to work further up the chain, at the point of artistic production.

This decision came through conversations with my audience and research undertaken over the course of this project, but also through personal reflection and understanding of my own actions and motivations as an artist.

My initial concept proposed to use the x/y axes of a plotter fitted with a custom tool head that would extrude magic sand (much as a 3D printer extrudes filament), a material renowned for its water-resistant quality: when dropped into water it holds its form as blobs of sand. The positions at which the sand would be dropped were to be determined by soundscapes recorded in non-gallery settings. This was to encourage a human connection with the piece and its outcome, whilst limiting active interaction from the audience. It is important to me not to put the onus on the audience to make my work be or do something; making labourers out of them does not reinstate them as key players within the sphere of the art object. However, through reflective conversations over my MVPs for this project, where I successfully translated sound data into G-code and demonstrated how the magic sand would be affected by it, audience feedback made it apparent that there was a disconnect between their understanding and my intentions.

At the same time, whilst I was learning and playing around with the plotter setup, I accidentally got some of my machine code wrong, meaning that instead of the good morning message I intended to plot for my housemate, I plotted all the inverse marks instead (where the pen tool was supposed to be down and drawing, it was up, and vice versa). This was a mistake, a failure, but what surprised me was that although I considered it as such, the outcome had a beauty of its own. This was partly because the bold lines felt very cubist and abstract to me, a style of artwork I particularly enjoy, but also because I had not been expecting this result. I could have stopped the plot as soon as I realised, but I decided to let it play out. It was at this point I realised there was another way I could actively combat the hushed aura of the art world. Rather than put it on the audience to continue to decipher complex concepts presented to them by artists and art galleries, I could instead actively promote that the path to “success” is paved with “failures” and in turn dissolve the barriers between the audience and this so-called Art World they often feel excluded from, even when in Art’s own house.

Process


To begin with, I followed the tutorial by Lewis from DIY Machines on building the plotter he designed. The tutorial is presented very well, taking you through each step whilst also explaining its significance. Lewis’ design uses 3D printed parts, for which he supplies the STL files.

It runs on an Arduino Uno with a CNC Shield and two Stepper Drivers for the X and Y axis Stepper Motors. I chose to use TMC2208 Stepper Drivers due to their quietness. The Pen Mount slides up and down the Y end through a Linear Bearing and is controlled by a servo motor mounted at the opposite end of the Y shaft, connected by a timing belt. The X and Y axes also use timing belts, to move the trolley along the rail (X-axis) and to move the linear rods through two Linear Bearings (Y-axis). Lewis also provides designs for an electronics housing to enclose the Arduino and the CNC Shield, with a side-mounted cooling fan. I made some modifications to this housing to add the WND-31Z branding and adapted it to fit the size of the barrel connector I had. After some use of the plotter, I also decided to add a second set of supports to hold a 25mm M5 bolt and another toothed Idler Wheel to help guide the Servo Timing Belt, as I found the tension point within the original design did not provide enough force in the right place for the pen to clear the paper.

I made further adaptations to Lewis’ designs to enable the use of a double Pen Mount, the key feature for Wednesday’s third mode, Redact, the only mode that relies on physical modification rather than software. I made all of my modifications in Autodesk Fusion 360, converting the Meshes to Bodies so that I could work from the STL files. The double pen mount required working out the points at which both Y ends could be extended to include a second servo and timing belt, and how these attachments could easily be added and removed without impacting the single pen configuration.

In terms of software, I worked mainly in Processing to build a custom process for converting image files into plottable G-Code. The process commonly used by the plotting community is to convert SVG files into G-Code with a program like Inkscape, applying features there to create hatched looks, before sending the exported code to the plotter from a workspace like ChiliPeppr or G-Code Sender. I wanted something that could artistically convert images, translate this into G-Code, and connect and send to the plotter all in one platform; an environment like Processing enabled me to do exactly that.

To establish the connection between Wednesday and Processing over Serial, I had to understand the handshake process. I used the Arduino Serial Monitor to see what the GRBL configuration sends back once the connection has been established. Building on MarkJB’s GitHub repo and my own investigations into GRBL and G-Code, I have been able to implement code in Processing that connects over Serial and sends the initial set-up commands (the plotter is locked when it first connects and must receive the $H homing command before anything can happen), as well as functions that format Points generated in Processing into G-Code movements; e.g. G01 X150 Y200 F2000 moves in a straight line to point (150, 200) at a feed rate of 2000. It is also important to have a send and receive process in order not to overload the buffer.
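Below is a minimal sketch of that connect, home and send-and-wait pattern, rather than my full implementation; the serial port index and the 115200 baud rate are assumptions (115200 is GRBL's default), and the ok-based flow control is the standard GRBL convention.

import processing.serial.*;

Serial grbl;
boolean okReceived = true;       // true once GRBL has acknowledged the last line

void setup() {
  // port index and 115200 baud are assumptions; 115200 is GRBL's default
  grbl = new Serial(this, Serial.list()[0], 115200);
  grbl.bufferUntil('\n');
  delay(2000);                   // give GRBL time to print its start-up banner
  sendLine("$H");                // homing cycle: unlocks the machine before any motion
}

void draw() {
  // draw() keeps the sketch alive so serialEvent() continues to fire
}

// Format a Processing point as a straight-line move,
// e.g. gcodeMove(150, 200, 2000) -> "G01 X150.00 Y200.00 F2000"
String gcodeMove(float x, float y, int feed) {
  return String.format("G01 X%.2f Y%.2f F%d", x, y, feed);
}

// Only write a line when the previous one has been acknowledged,
// so GRBL's small receive buffer is never overloaded
void sendLine(String line) {
  grbl.write(line + "\n");
  okReceived = false;
}

void serialEvent(Serial p) {
  String reply = p.readStringUntil('\n');
  if (reply != null && trim(reply).equals("ok")) {
    okReceived = true;           // safe to send the next command
  }
}

Waiting for each ok before sending the next line is the simplest of GRBL's streaming approaches; a character-counting protocol keeps the buffer fuller and runs faster, but it is not needed at plotting speeds.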

To convert image files into plottable points I used davekch’s Linerizer, an implementation of the Floyd–Steinberg dithering technique. This process uses the contrast between light and dark parts of the image to generate a set of points, visually connected as lines, redrawing the image as line art. I had to reduce the number of points used, as a pen stroke is much thicker than the Processing line; however, I have made it so that the user can set this to their desired effect.
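At its core, that technique is error diffusion. The function below is a minimal sketch of Floyd–Steinberg dithering in Processing, not Linerizer itself: each pixel is snapped to black or white and the rounding error is pushed onto its unvisited neighbours, leaving a field of dark pixels that can be treated as plottable points.

PImage dither(PImage src) {
  PImage img = src.get();                  // work on a copy of the source image
  img.filter(GRAY);
  img.loadPixels();
  float[] lum = new float[img.pixels.length];
  for (int i = 0; i < lum.length; i++) {
    lum[i] = brightness(img.pixels[i]);
  }
  for (int y = 0; y < img.height; y++) {
    for (int x = 0; x < img.width; x++) {
      int i = y * img.width + x;
      float oldVal = lum[i];
      float newVal = oldVal < 128 ? 0 : 255;   // quantise to black or white
      lum[i] = newVal;
      img.pixels[i] = color(newVal);
      float err = oldVal - newVal;
      // spread the quantisation error onto the neighbours still to be visited
      if (x + 1 < img.width)                       lum[i + 1]             += err * 7 / 16.0;
      if (x > 0 && y + 1 < img.height)             lum[i + img.width - 1] += err * 3 / 16.0;
      if (y + 1 < img.height)                      lum[i + img.width]     += err * 5 / 16.0;
      if (x + 1 < img.width && y + 1 < img.height) lum[i + img.width + 1] += err * 1 / 16.0;
    }
  }
  img.updatePixels();
  return img;
}

Every black pixel that survives is a candidate plot point, which is exactly why the point count has to be reduced when a pen stroke is so much wider than a screen pixel.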

Linerizer also uses the Drop Library, meaning that the sketch window does not need to be defined within setup and can cope with different sized images. I have built upon these Drop functions to enable multiple pictures to be ‘dropped’ into the window one after the other, resetting the Linerizer effect and enabling new points to be generated. This is key for the Redact mode, when two sets of code need to be generated one after the other.
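A minimal sketch of that multi-drop behaviour, assuming the Drop library's SDrop and DropEvent API, would look something like this: each image dragged onto the window replaces the previous one and the point generation can be reset and re-run, which is what lets Redact's second image be dropped straight after the first.

import drop.*;

SDrop drop;
PImage current;

void setup() {
  size(600, 600);                  // fixed window size here, for simplicity
  drop = new SDrop(this);
}

void draw() {
  background(255);
  if (current != null) {
    image(current, 0, 0, width, height);
  }
}

// Called by the Drop library whenever something is dragged onto the window
void dropEvent(DropEvent event) {
  if (event.isImage()) {
    current = event.loadImage();   // replace the previous image
    // here the dither / point generation would be reset and re-run
  }
}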

Most of my testing involved the physical set-up, for which, for ease, I used the web-based ChiliPeppr environment, as this gave me a visual representation of the G-Code and the expected outcome. When I was testing the Processing side, however, I used images with basic shapes, making it easy to check that the G-Code was formatted correctly and that the modes were acting as I expected: inverting certain commands, removing the correct lines of code whilst leaving the fundamental ones, and blending two sets of data together. This made it much more manageable.
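As an illustration of the kind of check this made easy, here is a minimal sketch of a mode that inverts the pen commands across a list of G-Code lines; it assumes the pen is lowered with M3 and raised with M5, one common servo-pen convention, rather than whichever commands Wednesday actually uses.

// Swap every pen-down command for pen-up and vice versa, leaving the
// movement and set-up commands untouched
String[] invertPen(String[] gcode) {
  String[] out = new String[gcode.length];
  for (int i = 0; i < gcode.length; i++) {
    String line = trim(gcode[i]);
    if (line.startsWith("M3")) {
      out[i] = "M5";               // pen down becomes pen up
    } else if (line.startsWith("M5")) {
      out[i] = "M3";               // pen up becomes pen down
    } else {
      out[i] = line;               // moves, homing etc. pass through unchanged
    }
  }
  return out;
}

Run over the code for a simple square, it is obvious at a glance whether the result draws the gaps rather than the edges.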