Reflection 11
This week began with the reading and discussion of The Design of Everyday Things and Co-Intelligence. The discussion of knowledge was the first topic that sparked my interest, because with most things we design, we assume the user has SOME level of pre-existing knowledge about our design. Whether it's an interface, a stove, or even a poster advertisement, we design for somebody, attempting to appeal to a group. But then Norman began speaking about behavior and how we form conceptions of things without needing complete knowledge, by arranging certain things or copying others. A specific quote I love is “it is actually quite amazing how often it is possible to hide one’s ignorance, to get by without understanding or even much interest.” The reason I love this is how it reads in today’s world. Think of how much easier it is to get by without actually knowing, now that we have AI integration. Students get through papers and hard classes, artists get imagery instantly, and so on. We hide the fact that we may not be willing to put in the time to do things ourselves, not when there’s a quicker and often (in a student’s eyes) better option. Does AI count as knowledge in the head or knowledge in the world? Or is it a combination of both? It provides knowledge to complete tasks, making it available in the world, but we are also learning how to use it well, making it knowledge in the head. However, not everyone cares about using it in a particular way; more commonly, people interact with language models in a “monkey see, monkey do” relationship. I believe a lot of people will simply let it complete the task they want completed, and the importance of human quality will be lost, because they assume quality is already present once a box has been checked. Or, as Norman talks about, the result will be “good enough” and the memory of human precision will be lost to AI.
In regards to Praxis Inquiry 3, I must admit that I’m pretty happy to close that project. It’s not that I disliked it exactly; it was more the stress surrounding this time in the semester and my difficulty arranging time outside of class to work on it in advance. Even in class I have a really hard time actually designing things; I prefer my own setup and nobody around to peer over my shoulder as I experiment. However, class time was really helpful with ideation and strategy, as Preston and I completed almost the entire process and concept for our agent while in class. Admittedly, he is much more well-versed in AI than I am and definitely took it upon himself to do a deep dive into our project with Chat. I appreciated his involvement a lot, as he brought up perspectives I wouldn’t have thought to ask about. I especially enjoyed our class workshop where we created our own GPT. Not only did I find that fascinating, it was extremely helpful for the design of our project. I used it extensively while creating the interface; it shocked me how real it felt. Although I felt rushed and stressed the majority of the time, just because of circumstances outside of class, I learned a lot in this inquiry that still made it a fruitful experience. I learned a ton about the process of immigration, which opened up a whole new feeling of empathy toward those who have to go through that tedious process, and I broadened my knowledge of security, privacy, and biometric use. Because we were creating a tool-based agent implemented through already existing technology, I’ll admit the AI mockups and design were a little less fun. However, I was challenged to design a whole new interface for something that doesn’t exist yet. Once I got started, I really enjoyed being pushed to make something totally different and new from the interfaces I’ve interacted with before. Lastly, I think Preston and I were a good match in the end, but for a second there, we were totally flailing. Our schedules and procrastinating natures really tested our limits, but our communication ended up being really great and we were able to bounce ideas off each other and learn from our respective strengths. I’ll admit that I felt I did not have as much control over some of our decisions or explanations, and sometimes my preliminary work would be disregarded, but honestly, I kind of needed a partner who was overly involved and willing to take charge this time. In the end, I’m very pleased with how our project turned out (even though our presentation was so long!).
One AI update that really stood out to me is the improvement in generating longer video clips, demonstrated in a cartoon example here. NVIDIA and Stanford researchers just unveiled "Test-Time Training," which produces animations that are a minute long. What impressed me most was the improvement in consistency across scenes and the natural, dynamic movement. I think back to when we produced cartoon animations in class and how much I struggled to get my dog to A) stay the same dog for the whole 10 seconds and B) walk naturally and dynamically. Seeing how much better the AI has already become is extremely impressive but also kind of scary, as I think the animation industry will probably freak out a little. Hopefully this will be a fruitful tool for them and not a crutch!