Saturday, August 28, 2010

The Construct: Self-Transforming Virtual Realities

A Discussion Between SeH and YKY, Edited for Blog Format

“I have no idea what you're talking about except that it sounds TOTALLY awesome.” --FreeOne3000

Redefining our Understanding of Graphical User Interfaces (GUIs)




An Artificial Intelligence GUI: AI operating at the most fundamental drawing and input levels
a GUI that could potentially predict the user's actions
at the resolution of mouse movements
That would be like... a tactile AGI
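
To ground this, here is a minimal sketch of the lowest rung of such prediction, assuming a simple constant-velocity model of the pointer; a learning GUI would swap this for a model trained on the user's actual movement patterns. The class is hypothetical, not part of any existing toolkit.

```java
import java.awt.Point;
import java.util.ArrayDeque;
import java.util.Deque;

/** Minimal sketch: predict the next pointer position by constant-velocity
 *  extrapolation over a short history of samples. */
public class MousePredictor {
    private static final int WINDOW = 4;            // how many samples to keep
    private final Deque<Point> history = new ArrayDeque<>();

    public void observe(int x, int y) {
        history.addLast(new Point(x, y));
        if (history.size() > WINDOW) history.removeFirst();
    }

    /** Extrapolate one step ahead from the average recent velocity. */
    public Point predictNext() {
        if (history.isEmpty()) return new Point(0, 0);
        if (history.size() == 1) return history.peekLast();
        Point first = history.peekFirst(), last = history.peekLast();
        int steps = history.size() - 1;
        return new Point(last.x + (last.x - first.x) / steps,
                         last.y + (last.y - first.y) / steps);
    }
}
```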

Fluxus - Livecoding Environment

Binds Scheme (a LISP dialect) to OpenGL graphics


Using AI to do tactile or physical reasoning
Which is OK... but textual reasoning is an entirely different realm
sure, there are multiple models we could insert in between, like a physics engine
i mean a pre-made physics engine, like Bullet
physics has the inherent feature of preventing two solid objects from occupying the same points in space

which is what graph layout and GUI design are really all about
now what we need is a time equivalent for deciding what objects are presented to the user
not everything at once, but a subset of the KB
and how that subset's boundaries change (expand/contract buttons, etc)
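
A physics engine gives you non-overlap for free, and that constraint alone is most of a layout algorithm. Here is a minimal sketch of the idea, assuming axis-aligned boxes and iterative separation rather than a full engine like Bullet; all names are illustrative.

```java
import java.util.List;

/** Minimal sketch: separate overlapping boxes by pushing each intersecting
 *  pair apart along the axis of least penetration, the core constraint a
 *  rigid-body engine enforces. Real engines add forces, velocities, sleeping. */
public class OverlapLayout {
    static class Box {
        double x, y, w, h;
        Box(double x, double y, double w, double h) { this.x = x; this.y = y; this.w = w; this.h = h; }
    }

    static void relax(List<Box> boxes, int iterations) {
        for (int it = 0; it < iterations; it++) {
            for (int i = 0; i < boxes.size(); i++) {
                for (int j = i + 1; j < boxes.size(); j++) {
                    Box a = boxes.get(i), b = boxes.get(j);
                    double dx = (a.x + a.w / 2) - (b.x + b.w / 2);
                    double dy = (a.y + a.h / 2) - (b.y + b.h / 2);
                    double px = (a.w + b.w) / 2 - Math.abs(dx);   // x penetration
                    double py = (a.h + b.h) / 2 - Math.abs(dy);   // y penetration
                    if (px > 0 && py > 0) {                       // boxes intersect
                        if (px < py) {                            // push along smaller overlap
                            double s = Math.signum(dx == 0 ? 1 : dx) * px / 2;
                            a.x += s; b.x -= s;
                        } else {
                            double s = Math.signum(dy == 0 ? 1 : dy) * py / 2;
                            a.y += s; b.y -= s;
                        }
                    }
                }
            }
        }
    }
}
```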

The Connection Between a Physical World and GUI

SpaceGraph-C 3D Interface (at www.youtube.com)

So, the physical objects are not really objects... what are they supposed to be?

representations of data in the system
a button represents a possible action that can be invoked, a text represents a string, etc


Such as... a file? a document?

some data objects will have multiple representations. then a meta-representation allows the user to select among them, or to adjust properties of the visualization: color, size, etc.
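
As a sketch of what a meta-representation could look like, assuming a registry of named views per datum plus adjustable visual properties; the interface and class names are invented for illustration.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** One datum, several swappable views, and a meta-view that lists the
 *  alternatives and holds visual properties such as color and size. */
interface Representation<T> {
    String render(T datum);              // stands in for real draw calls
}

class MetaRepresentation<T> {
    private final Map<String, Representation<T>> views = new LinkedHashMap<>();
    private String active;
    double size = 1.0;                   // adjustable visual properties
    int colorRgb = 0xFFFFFF;

    void register(String name, Representation<T> view) {
        views.put(name, view);
        if (active == null) active = name;
    }
    void select(String name) { if (views.containsKey(name)) active = name; }
    List<String> available() { return List.copyOf(views.keySet()); }
    String render(T datum) { return views.get(active).render(datum); }
}
```

A string datum could then register, say, a plain-text view and a tag-cloud view, and the user flips between them or recolors them through the meta-representation.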


And they "feel" like physical objects... is that the idea?

the larger idea is that we can instantiate virtual agents in the space, making it self-aware as a cybernetic system

it traces optic rays.  mouse pointers are just a kind of light.
and the retinas of virtual agents floating around can see the space from inside it.

a virtual agent is an embodiment of a program.  It's got to have algorithms - with inputs and outputs that change in realtime... otherwise it would seem dead or sleeping.
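
The "mouse pointer as light" idea reduces to ray casting. Below is a minimal sketch of the intersection test that both pointer picking and an agent's retina would share, assuming spheres as scene objects; purely illustrative.

```java
/** Cast a ray into the scene and test it against a sphere: the same
 *  geometry serves mouse picking and a virtual agent's per-pixel vision. */
public class Ray {
    final double[] origin, dir;          // dir assumed normalized
    Ray(double[] origin, double[] dir) { this.origin = origin; this.dir = dir; }

    /** Distance along the ray to the sphere's surface, or -1 for a miss. */
    double hitSphere(double[] center, double radius) {
        double[] oc = { origin[0] - center[0], origin[1] - center[1], origin[2] - center[2] };
        double b = oc[0]*dir[0] + oc[1]*dir[1] + oc[2]*dir[2];
        double c = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - radius*radius;
        double disc = b*b - c;           // quadratic discriminant
        if (disc < 0) return -1;
        double t = -b - Math.sqrt(disc); // nearer of the two roots
        return t >= 0 ? t : -1;
    }
}
```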

How can you represent algorithms in physical space?


text, graph vis, data flow, class hierarchy.  anything and everything imaginable

Or, maybe one algorithm is a box or cube-like thing?

sure, an algorithm could be a cube which, when clicked, explodes into a program tree
or the reason why something happens can have its proof tree attached semi-transparently
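
A sketch of the data structure behind that, assuming a node renders as a cube while collapsed and as a tree once expanded, with a proof tree attachable at reduced opacity; all fields are illustrative.

```java
import java.util.List;

/** A collapsed "cube" node that expands on click into its program tree,
 *  with an optional semi-transparent proof-tree annotation. */
class ProgramNode {
    final String label;
    final List<ProgramNode> children;
    boolean expanded = false;            // collapsed = rendered as a cube
    ProgramNode proof;                   // optional attached proof tree
    double opacity = 1.0;

    ProgramNode(String label, List<ProgramNode> children) {
        this.label = label;
        this.children = children;
    }
    void onClick() { expanded = !expanded; }  // cube "explodes" into the tree
    void attachProof(ProgramNode proofTree) {
        proof = proofTree;
        proof.opacity = 0.4;             // semi-transparent overlay
    }
}
```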

Oh I see... very ambitious idea... but inevitable

yes, it's the boundary between the artificial intelligence and 'artificial life' fields

programming would consist of text-entry and drawing lines between things
wires, or rope - which can also be physically modeled
and have other objects attached to them (reification)

text is just another input modality. you would be free to use text-only
however you could still use some of the navigation possibilities while entering text
like advanced consoles
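
Here is a minimal sketch of the wiring model, assuming each object exposes one transformation and a wire simply pushes its output downstream; in the space itself the wires would be physically modeled ropes. Names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

/** Programming by wiring: each node transforms a value, and a wire
 *  connects one node's output to another node's input. */
class Node {
    final String name;
    final UnaryOperator<Object> op;
    final List<Node> outputs = new ArrayList<>();   // wires out of this node

    Node(String name, UnaryOperator<Object> op) { this.name = name; this.op = op; }

    static void wire(Node from, Node to) { from.outputs.add(to); }

    /** Push a value through this node and along every outgoing wire. */
    void feed(Object value) {
        Object result = op.apply(value);
        if (outputs.isEmpty()) System.out.println(name + " -> " + result);
        for (Node next : outputs) next.feed(result);
    }
}
```

Wiring an "uppercase" node into a "print" node and feeding it a string is then one line each: the drawn rope is the program.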

A New Form of Communication

i think it's possible that a new form of communication can be invented
based on fractal language
which isn't content with just character-based text
but where size, position, etc matter
relative to other words
for example i could implement a zooming file-tree navigator
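
A sketch of that zooming navigator, assuming semantic zoom: how much of a node gets rendered depends on its apparent size, so scale and position carry meaning alongside the characters. Purely illustrative.

```java
/** Semantic zoom over a file tree: detail is proportional to on-screen
 *  size, and children shrink recursively, giving the fractal effect. */
class ZoomNode {
    final String name;
    final ZoomNode[] children;

    ZoomNode(String name, ZoomNode... children) { this.name = name; this.children = children; }

    void render(double screenSize, String indent) {
        if (screenSize < 4) return;                       // too small: invisible
        if (screenSize < 12) {                            // tiny: icon only
            System.out.println(indent + "[" + name.charAt(0) + "]");
            return;
        }
        System.out.println(indent + name);                // large: full label
        for (ZoomNode c : children)
            c.render(screenSize / Math.max(1, children.length), indent + "  ");
    }
}
```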

Perhaps that is what KB engineering is about
What you deal with is not programs, but KB items

well ultimately, all of these constructs can be encoded logically
perhaps learnable, from example or demonstration
"this rectangle is called a window. its x in the upper right corner that when clicked, closes it."
it will be interesting when the AI starts to generate new programs, surprising us
and find new ways of unifying different UI models
unifying or abstracting them
this will also allow the GUI to self-transform
according to human preferences
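
For instance, that window sentence might compile into facts like the following; the predicate names are invented for illustration, not Genifer's actual vocabulary.

```java
import java.util.List;

/** The demonstrated window description encoded as logical facts a
 *  logic engine could learn from and reason over. */
public class WindowKb {
    record Fact(String predicate, String... args) {
        @Override public String toString() {
            return predicate + "(" + String.join(", ", args) + ")";
        }
    }

    public static void main(String[] args) {
        // "this rectangle is called a window. the X in its upper-right
        //  corner, when clicked, closes it."
        List<Fact> kb = List.of(
            new Fact("isa", "rect1", "window"),
            new Fact("part_of", "x_button1", "rect1"),
            new Fact("position", "x_button1", "upper_right"),
            new Fact("on_event", "x_button1", "click", "close(rect1)"));
        kb.forEach(System.out::println);
    }
}
```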

Inventing KB items is equivalent to generating new programs

once the system can be programmed from within itself, it will be fully reflective
that's the point where NetBeans isn't necessary. in fact, most programs we use will be generalized into this system's functionality

all forms of content creation and editing, communication, programming, information navigation, etc.
like the Construct in The Matrix



also see: The Matrix Construct: The Cosmic Playground (http://www.youtube.com/watch?v=rL-aP8HBIDk)



this could be fruitfully applied to KB engineering
We definitely need a cool GUI for manipulating the KB in Genifer

Where Did This Come From?

using a logic engine to learn OpenGL drawing commands
and interpret GUI input
a re-interpretation of the MVC pattern

correct. MVC could apply to anything: console, or even voice-only
OpenGL is just the canvas to draw on
OpenGL provides a complete set of drawing and FX primitives
it could also be HTML
also i've included WebGL, which is soon to become standard
it's in the beta versions of Firefox, Chrome, and probably some others
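
A sketch of the reinterpreted MVC, assuming the model is a KB of facts, the controller is a set of rules mapping facts to draw commands, and the view is whatever backend consumes them: OpenGL, HTML, WebGL, or a plain console. Everything here is illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** Logic-driven MVC: facts in, draw commands out, backend pluggable. */
public class LogicMvc {
    record DrawCmd(String shape, int x, int y) {}

    interface ViewBackend extends Consumer<DrawCmd> {}   // OpenGL, HTML, console...

    public static void main(String[] args) {
        List<String> kb = List.of("button(ok)", "text(hello)");  // the model
        List<DrawCmd> cmds = new ArrayList<>();
        int y = 0;
        for (String fact : kb) {                          // "controller" rules
            if (fact.startsWith("button")) cmds.add(new DrawCmd("rect", 0, y));
            if (fact.startsWith("text"))   cmds.add(new DrawCmd("label", 0, y));
            y += 20;
        }
        ViewBackend console = c -> System.out.println(c); // swap for a GL canvas
        cmds.forEach(console);
    }
}
```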


