Jingyi Wang

WORLD is a Python-based video tool designed to enable users to perceive the existence of beings in digital videos.
    Technology has evolved from ancient times to today, growing beyond a mere extension of the human body into an almost autonomous system, one that encompasses everything from household cables to smart cities and brainwave imaging. Although we use our natural senses to interact with technology, many of its workings remain incomprehensible to the human brain, producing a new sensory experience governed by technology. The question arises: how do technologies reshape our perception of the world? WORLD offers users an opportunity to understand, through digital video technology, how technology shapes their existence.

Python Application (Preliminary Model), An Archive of Rendered Videos, Immersive Experience Design
Interactive Art   Experimental Film   World Building   Multimedia Animation   Computational Video Art   Computational Design
Individual Project
Created and Exhibited during Nov–Dec 2023
Critique of Technology, Digital Archive, Poetic Coding, Digital Ontology
Object Detection, GPT, Python, FFmpeg

WORLD: Version 0.0 User Interface

WORLD: Object Prediction on Video with YOLO

WORLD: Processed Video Output

#12/1/2022 UPDATES#

WORLD 0.0 (A Preliminary Model) ⭐

This is a preliminary model that conducts object prediction on input videos and places cut-out bounding boxes on a canvas alongside a basic user interface.
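The placement logic behind that canvas can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the tool's actual code: it assumes detections arrive as `(class_name, x, y, w, h)` tuples from a YOLO pass, and the canvas and grid dimensions are made-up placeholders.

```python
# Minimal sketch of the grid-placement step: map each detected
# bounding box to a cell on a fixed grid canvas. The detection
# format (label, x, y, w, h) and the grid parameters below are
# illustrative assumptions, not the tool's actual interface.

CANVAS_W, CANVAS_H = 1280, 720
COLS, ROWS = 8, 4
CELL_W, CELL_H = CANVAS_W // COLS, CANVAS_H // ROWS  # 160 x 180

def grid_cell(index):
    """Return the (x, y) top-left corner of the grid cell for the
    index-th detection, wrapping row by row across the canvas."""
    col = index % COLS
    row = (index // COLS) % ROWS
    return col * CELL_W, row * CELL_H

def place_detections(detections):
    """Pair each detection with its target cell on the canvas."""
    return [
        {"label": label, "source_box": (x, y, w, h), "canvas_xy": grid_cell(i)}
        for i, (label, x, y, w, h) in enumerate(detections)
    ]

detections = [("person", 10, 20, 50, 100), ("car", 200, 80, 120, 60)]
placed = place_detections(detections)
print(placed[1]["canvas_xy"])  # second detection lands in cell (1, 0) -> (160, 0)
```

In the real tool, each `source_box` would be cropped out of the video frame (e.g. with OpenCV) and pasted at `canvas_xy`; the sketch keeps only the pure coordinate logic.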

WORLD 1.0 is currently undergoing fine-tuning and will be released as an open tool on this website.


WORLD is a Python-based video tool designed to enable users to perceive the existence of beings in digital videos. It is intended for use with any archive, performing object detection on input videos and producing a reconstructed video. The output shows a grid-based canvas, with a collection of military-style terms, generated from the names of detected objects, printed on screen.

Read more about the concept in the next section.

⭐ In the upcoming version, the tool will be updated to include GPT text generation and FFmpeg post-processing. ⭐


The output shows a grid-based canvas, with rolling text describing an army structure, generated from the class names of objects detected in the video.
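One way the GPT step could be wired up is sketched below. The prompt wording and function names are assumptions for illustration; the actual model call (e.g. via an OpenAI client) is deliberately omitted, since only the prompt-building is shown here.

```python
# Hypothetical prompt-building step for the GPT text generation:
# collect class names from a detection pass, deduplicate them, and
# fold them into a single prompt asking for military-style text.
# Prompt phrasing and names are illustrative assumptions.

def collect_class_names(detections):
    """Deduplicate class names while preserving first-seen order."""
    seen = []
    for label, *_ in detections:
        if label not in seen:
            seen.append(label)
    return seen

def build_prompt(class_names):
    """Fold detected class names into a single generation prompt."""
    names = ", ".join(class_names)
    return (
        "Describe a military command structure whose units are named "
        f"after the following objects: {names}. "
        "Write it as short rolling caption lines."
    )

dets = [("person", 0, 0, 1, 1), ("car", 0, 0, 1, 1), ("person", 0, 0, 1, 1)]
print(build_prompt(collect_class_names(dets)))
```

The returned string would then be sent to the GPT API, and the response rendered as the rolling on-screen text.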

This update reflects my conceptual response to the vast digital video archive saturating every corner of the internet. Stemming from the notion of the incomprehensibility of technology, I aim to visualize how contemporary life is intricately entwined with technology, which permeates every aspect of it. The tightly knit and expansive organization of this digital realm mirrors a heavily guarded military structure.

Within information technologies like digital video, entities undergo layered technological processes—capturing, modeling, encoding, decoding—before entering our cognition. Is the world displayed by computers a part of reality? Does ontology exist in the electronic realm? This tool will playfully and structurally reconstruct any provided video, prompting a fresh exploration and scrutiny of how commonplace video technologies reshape our perception of the world.

Object Detection + GPT Text Generation
FFmpeg Post-Processing
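The FFmpeg stage could be driven from Python via `subprocess`. The sketch below only builds the argument list, using FFmpeg's `drawtext` filter to burn the generated text onto the video; the file paths and filter settings are placeholders, not the project's actual pipeline.

```python
# Hypothetical FFmpeg invocation for the post-processing step:
# overlay generated text on the reconstructed video via drawtext.
# Paths, font size, and position are illustrative assumptions.
import subprocess

def ffmpeg_drawtext_cmd(src, dst, text, fontsize=24):
    """Build (but do not run) an ffmpeg command that burns `text`
    onto the video using the drawtext filter, copying the audio."""
    vf = (
        f"drawtext=text='{text}':fontsize={fontsize}"
        ":fontcolor=white:x=10:y=h-40"
    )
    return ["ffmpeg", "-y", "-i", src, "-vf", vf, "-c:a", "copy", dst]

cmd = ffmpeg_drawtext_cmd("world_input.mp4", "world_output.mp4", "UNIT: PERSON")
print(" ".join(cmd))
# To actually run it (requires ffmpeg installed):
# subprocess.run(cmd, check=True)
```

Keeping the command construction separate from execution makes the text overlay easy to inspect and test without invoking FFmpeg itself.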


MY ROOM is an immersive experience that invites audiences into the intimate confines of an artificial intelligence's private space. Upon entering, visitors encounter a room featuring an expansive white city model, showcasing strategies in electronic video technology alongside a silhouette embodying humanized intelligence. At the room's end, a bedroom displays a TV featuring a curated mix of trending videos processed by WORLD. Throughout, an 'elder' delivers a continuous monologue, analyzing visitors' appearances through cameras and offering insights rooted in stereotypes.

Challenging the boundaries of intelligence and exploring the tech-dependent facets of human life, the design not only scrutinizes the effective use of computational design but also vividly presents the physical and mental dilemmas inherent in an increasingly complex technological future.

Due to budget constraints, this experience currently exists solely in digital form. It is slated to be showcased in the School of Poetic Computation class in December 2023.