WORLD is a Python-based video tool designed to enable users to perceive the existence of beings in digital videos.
Technology has transformed from ancient times to today, evolving beyond a mere extension of the human body into an almost autonomous system, encompassing everything from household cables to smart cities and brainwave imaging. Although we use our natural senses to interact with technology, many of its workings remain incomprehensible to the human brain. This produces a new sensory experience governed by technology. The question arises:
how do technologies reshape our perception of the world? WORLD offers users an opportunity to understand, through digital video technology, how technology shapes their existence.
WORLD: Version 0.0 User Interface
WORLD: Object Prediction on Video with YOLO
WORLD: Processed Video Output
⭐ WORLD 0.0 (A Preliminary Model) ⭐
This preliminary model runs object prediction on input videos and places cut-out bounding boxes on a canvas, paired with a basic user interface.
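A minimal sketch of the canvas step described above, assuming detections arrive as plain bounding boxes (e.g. from YOLO); the function name, cell size, and grid layout are illustrative assumptions, not the tool's actual code:

```python
import numpy as np

def paste_cutouts_on_grid(frame, boxes, cell=96, cols=4):
    """Crop each (x1, y1, x2, y2) box out of `frame`, resize it by
    nearest-neighbour sampling, and place it in the next grid cell.
    `boxes` stands in for detector output; coordinates are pixels."""
    rows = (len(boxes) + cols - 1) // cols
    canvas = np.zeros((rows * cell, cols * cell, 3), dtype=frame.dtype)
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        cut = frame[y1:y2, x1:x2]
        # nearest-neighbour resize to the cell size (avoids a cv2 dependency)
        ys = np.arange(cell) * cut.shape[0] // cell
        xs = np.arange(cell) * cut.shape[1] // cell
        tile = cut[ys][:, xs]
        r, c = divmod(i, cols)
        canvas[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = tile
    return canvas
```

In the real pipeline each video frame would pass through the detector first; here the compositing logic alone is shown so it can run without model weights.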
WORLD 1.0 is currently undergoing fine-tuning and will be released as an open tool on this website.
WORLD 0.0: VISUALIZING VIDEO ONTOLOGY
WORLD is a Python-based video tool designed to enable users to perceive the existence of beings in digital videos. It is intended for use with any archive, performing object detection on input videos and producing a re-constructed video. The output is a grid-based canvas on which a collection of military-style terms, generated from the names of detected objects, is printed on screen.
Read more about the concept in the next section.
WORLD 1.0:
OBJECT PREDICTION,
GPT & FFMPEG
The output shows a grid-based canvas with rolling text describing an army structure, generated from the class names of objects predicted in the video.
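One way the class-names-to-army-structure step could work, as a sketch: the rank vocabulary, function name, and roster format below are my assumptions for illustration; the actual tool generates its text with GPT from these class names.

```python
# Hypothetical rank vocabulary; WORLD's real text comes from a GPT prompt.
RANKS = ["general", "colonel", "sergeant", "private"]

def army_roster(class_names):
    """Count detected class names and render one roster line per class,
    cycling through RANKS, e.g. 'general car x1'."""
    counts = {}
    for name in class_names:
        counts[name] = counts.get(name, 0) + 1
    lines = []
    for i, (name, n) in enumerate(sorted(counts.items())):
        rank = RANKS[i % len(RANKS)]
        lines.append(f"{rank} {name} x{n}")
    return "\n".join(lines)
```

The roster string could then be fed to a language model as context, or scrolled directly over the canvas as the rolling text.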
This update reflects my conceptual response to the vast digital video archive saturating every corner of the internet. Starting from the notion that technology is incomprehensible, I aim to visualize how contemporary life is intricately entwined with technology, which permeates its every aspect. The tightly knit, expansive organization of this digital realm mirrors a heavily guarded military structure.
Within information technologies like digital video, entities undergo layered technological processes—capturing, modeling, encoding, decoding—before entering our cognition. Is the world displayed by computers a part of reality? Does ontology exist in the electronic realm? This tool will playfully and structurally reconstruct any provided video, prompting a fresh exploration and scrutiny of how commonplace video technologies reshape our perception of the world.
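Since this version names FFmpeg in its pipeline, here is a minimal sketch of how the processed frames could be handed back to FFmpeg for re-encoding; the file names, frame rate, and codec settings are assumptions for illustration, not the project's actual configuration:

```python
def ffmpeg_command(frame_pattern="out/frame_%05d.png", fps=25,
                   output="world_output.mp4"):
    """Build (but do not run) an ffmpeg argv that assembles numbered
    frame images into an H.264 video; pass the list to subprocess.run
    to execute it."""
    return ["ffmpeg", "-y",
            "-framerate", str(fps),     # input frame rate
            "-i", frame_pattern,        # numbered-image input pattern
            "-c:v", "libx264",          # H.264 encoder
            "-pix_fmt", "yuv420p",      # widely playable pixel format
            output]
```

Keeping the command as a list (rather than a shell string) avoids quoting problems when paths contain spaces.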
MY ROOM, AN INTERACTIVE EXPERIENCE
MY ROOM is an immersive experience that invites audiences into the intimate confines of an artificial intelligence's private space. Upon entering, visitors encounter a room featuring an expansive white city model, showcasing strategies in electronic video technology alongside a silhouette embodying humanized intelligence. At the room's end, a bedroom displays a TV featuring a curated mix of trending videos processed by WORLD. Throughout, an 'elder' delivers a continuous monologue, analyzing visitors' appearances through cameras and offering insights rooted in stereotypes.
Challenging the boundaries of intelligence and exploring the tech-dependent facets of human life, the design not only scrutinizes the effective use of computational design but also vividly presents the physical and mental dilemmas inherent in an increasingly complex technological future.
Due to budget constraints, this experience currently exists solely in digital form. It is slated to be showcased in the School of Poetic Computation class in December 2023.