Components like `timeline`, `glossary`, `issues` and even `dynamic-view` have similar structures; they could be abstracted into a single parametrizable component.
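A minimal sketch of what that abstraction could look like, in plain JavaScript (the preset names mirror the components above; the field names and `makeListComponent` factory are hypothetical, not the actual component API):

```javascript
// Hypothetical sketch: each existing component differs mainly in how it maps
// a wiki entry to a displayable {label, detail} pair, so one factory plus a
// preset table could replace them all.
const presets = {
  timeline: entry => ({ label: entry.date,  detail: entry.event }),
  glossary: entry => ({ label: entry.term,  detail: entry.definition }),
  issues:   entry => ({ label: entry.title, detail: entry.status }),
};

// makeListComponent returns a render function for a given preset name.
function makeListComponent(type) {
  const toItem = presets[type];
  if (!toItem) throw new Error(`unknown preset: ${type}`);
  return entries => entries.map(toItem);
}

// Usage with fake data:
const items = makeListComponent('glossary')([
  { term: 'PIM', definition: 'Personal Information Management' },
]);
console.log(items[0].label); // "PIM"
```

Adding a new text-like component then only means adding one preset line, not a new component.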
Note that `screenstack` itself follows that pattern even if it is not text. The layout itself can be done, as explored before, directly with e.g. D3 or Cytoscape, or pre-computed, e.g. with Gephi.
Being able to explore and improve the PIM from a wiki is actually **the main motivation of this project**.
Consider different visualizations, e.g.:
1. `web-url`, the most demanding, but most up to date and (partly) interactable (outside of VR)
2. `screencast`, less demanding (rendered server-side), but static and also partial (no iframe, no video, etc.)
3. "just" the page name, very light but also very limited
Depending on different criteria, zoom-level-dependent visualization (as touched on in https://observablehq.com/@utopiah/d3-pim-graph#cell-1183 ), or a ZUI, could be interesting to explore.
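As a sketch, the three fidelity levels above could be picked by viewing distance (the thresholds here are made up; a real ZUI would tune them per device and scene):

```javascript
// Hypothetical selector: pick a representation by distance to the viewer,
// matching the three fidelity levels (web-url > screencast > page name).
function representationFor(distanceMeters) {
  if (distanceMeters < 1.5) return 'web-url';   // close: live, interactable
  if (distanceMeters < 5)   return 'screencast'; // mid: static server-side render
  return 'page-name';                            // far: just the title, cheapest
}

console.log(representationFor(0.5)); // "web-url"
console.log(representationFor(10));  // "page-name"
```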
Consider a set of shortcuts, e.g. on the wrist, per layout type.
For example, while trying to interact with the content of a graph (interaction to be defined; it could be highlighting when within a certain distance), shortcuts could be related to e.g. `edge.target()` and `edge.source()` (cf https://js.cytoscape.org/#collection/traversing )
Keep in mind the already defined MIME types and `Content-Type`, which might be too low-level. Also the recently encountered `DataTransferItem` interface https://twitter.com/utopiah/status/1569929593028546560
See also https://fabien.benetou.fr/Analysis/LibrarianMoveWalls on the specifics of mapping data structures to explorable and memorable places with scaling constraints.
This also means that a set of jxr snippets would be displayed alongside each item of a type, e.g. for a graph each node would have jxr snippets shown on highlight to toggle the visibility of connected nodes, while for an edge one such snippet, when pinched, would teleport to the connected node.
Overall, jxr snippets per type should be stored on a dedicated page to be composable.
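A sketch of that per-type grouping, as if each group lived on its own wiki page and were composed at load time (the snippet strings are placeholders for actual jxr code, and `snippetsFor` is a hypothetical helper):

```javascript
// Hypothetical registry: jxr snippets grouped per content type.
const snippetsByType = {
  node: ['toggle-connected-visibility'],
  edge: ['teleport-to-connected-node'],
};

// Compose the snippet sets for the types present in a given layout.
function snippetsFor(types) {
  return types.flatMap(t => snippetsByType[t] || []);
}

console.log(snippetsFor(['node', 'edge']));
// → ["toggle-connected-visibility", "teleport-to-connected-node"]
```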
Consider how `gltf-jxr` https://git.benetou.fr/utopiah/text-code-xr-engine/src/branch/gltf-jxr , namely `jxr` code directly in a glTF model, could do so, i.e. change its shape or behavior based on a live data source.
A trivial way to test this would be a heart 3D model reacting to a cyclic event by changing scale (i.e. beating) every second, or something more specific applied within a model itself, e.g. a clock where the hands change their orientation based on the actual time of the user loading it.
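The two test cases can be sketched as pure functions of time, ready to be called from a per-frame tick handler (amplitude and period here are arbitrary choices, not values from the engine):

```javascript
// Heart: scale pulses once per second, between 1.0 and 1.2.
function heartScale(timeMs) {
  const phase = (timeMs % 1000) / 1000;        // 0..1 within each second
  return 1 + 0.2 * Math.abs(Math.sin(Math.PI * phase));
}

// Clock: hand orientations in degrees from the current hours/minutes.
function clockAngles(hours, minutes) {
  return {
    hour: ((hours % 12) + minutes / 60) * 30,  // 30° per hour, offset by minutes
    minute: minutes * 6,                        // 6° per minute
  };
}

console.log(clockAngles(3, 0)); // { hour: 90, minute: 0 }
```

Wired into a glTF model, `heartScale` would drive the model's uniform scale and `clockAngles` the rotation of two named hand nodes.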
See also WebXR, even A-Frame dataviz components, e.g. BabiaXR https://gitlab.com/babiaxr/aframe-babia-components