There is a lot of valid concern about the accessibility and abuse issues this could result in, but I think it's important to see the other side of the argument.
There was a really good thread on Twitter a couple of days ago: https://x.com/_chenglou/status/1951481453046538493
> In light of recent Figma news, lemme reiterate that of all the goods that can happen to the web, 90% of them can't happen due to not having access to font rendering & metrics in JS
And a few choice replies:
> It's kind of crazy that a platform specifically designed for presenting text doesn't provide functionality to manipulate text at a detail level
> Brute forcing text measurement in tldraw breaks my heart
Love it or hate it, the web is a platform for application development; making this easier is only good for everyone.
My argument on web APIs is that we should continue to go lower level, so font and text metrics APIs for canvas would be awesome as an alternative to this. But I'm also a proponent of "using the platform", and for text layout, web engines are incredible and very performant. Extending that capability to layout inside a canvas enables many awesome features.
One that I've repeatedly gone back to over the years is paginated rich text editing. It's simply impossible to do with contenteditable in a product-level way - one of the reasons Google Docs has a custom layout engine. This proposal would enable full use of contenteditable for rich text, but with full page/print layout control. I hope it lands in the browsers.
> of all the goods that can happen to the web, 90% of them can't happen due to not having access to font rendering & metrics in JS
I’d be interested to see a representative excerpt of this person’s “goods that can happen to the web”, because it sounds pretty ridiculous to me. Not much needs that stuff, a lot of it is exposed in JS these days, and a lot of the rest you can work around without it being ruinous to performance.
It’s also pretty irrelevant here (that is, to HTML-in-Canvas): allowing drawing HTML to canvas doesn’t shift the needle in these areas at all.
100% of my concern about the Web is about privacy and security... and why they don't happen.
> One that I've repeatedly gone back to over the years is paginated rich text editing. It's simply impossible to do with contenteditable in a product level way - one of the reasons Google docs has a custom layout engine.
As do we at Nutrient - we use HarfBuzz in WASM plus our own layouting; see the demo here: https://document-authoring-demo.nutrient.io/
Getting APIs for that into the platform would make life significantly easier, but thanks to WASM it's not a total showstopper.
Btw, I saw you're working on sync at ElectricSQL - say hi to Oleksii :)
If I get it right, every glyph used from the given font is rendered once as an SVG path (upside down! huh!), and then the whole page is a single huge SVG element in which every typed character is a <use> with a reference to that rendered glyph, translated with a CSS transform to the right place (I assume these coordinates come out of HarfBuzz?). Kinda mad that you had to redo 90% of the browser that way but the result is pretty impressive!
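(For readers trying to picture that structure, a rough sketch - the ids, coordinates, and path data here are made up for illustration; per the above, the real positions would come from HarfBuzz:)

    // Illustrative only: one <path> per glyph, one <use> per typed character.
    const page = `
      <svg viewBox="0 0 816 1056">
        <defs>
          <path id="g-e" d="M0 0 L10 0 L5 12 Z"/>  <!-- placeholder glyph outline, defined once -->
        </defs>
        <use href="#g-e" style="transform: translate(72px, 96px)"/>
        <use href="#g-e" style="transform: translate(83px, 96px)"/>
      </svg>`;
    document.body.insertAdjacentHTML("beforeend", page);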
I'm curious why you render the glyphs to paths rather than have the browser render them directly using, e.g., SVG <text> elements?
Was it hard to get this to work cross browser?
ps. srsly I love this about the web. You're doing this amazing engineering feat and I can just pop the trunk and learn all about it. Obviously feel free to not answer anything that's deemed a trade secret, I'm just geeking out hard on this thing :-) :-)
I don't mean HTML text nodes, I mean still the single big SVG like they do now, but with SVG <text> elements instead of <path> elements. They do know (I suppose) how much space that element would take since they're asking HarfBuzz to tell them.
And, I think we've come full circle. I'm pretty sure that's how I was rendering text for the online office suite[*] I wrote in ~1998 -- a Java Applet embedded in the browser.
[*] VCs: "We're not investing in this crap! No company in their right mind would store their precious, confidential documents on the Internet!"
Why would you want the world's least performant layout/UI engine to infect canvas? This literally just cements the situation you quote about having no access to good APIs.
A reminder that Figma had to "create a browser inside a browser" to work around DOM limitations: https://www.figma.com/blog/building-a-professional-design-to...
> It's simply impossible to do with contenteditable in a product level way - one of the reasons Google docs has a custom layout engine. This proposal would enable full use of contenteditable for rich text, but with full page/print layout control.
Why would it enable contenteditable for rich text if you yourself are saying that it doesn't work, and Google had to implement its own engine?
To make it make sense, in my opinion canvas should already be a first-class format for web browsers, so it doesn't have to live inside HTML.
Then we would have a choice of an HTML-first page with canvas elements in it, or a canvas-first page with HTML elements in it.
But what do I know.
If you have a canvas-first page, where do you store the title? Right, in <title> - so you're back to an HTML document anyway.
In reality they should really just allow content in the canvas element and call it a day.
It's kind of different because SVG and HTML are both XML-like text-based formats; it doesn't feel that wrong to mix them together. Unlike with canvas...
But then what's the point of the canvas here? Unless it was possible to mix and match canvas painting operations seamlessly with the declared elements...
This post is titled HTML-in-Canvas, so you can find the point in the link. A lot of people just want the freedom of canvas rendering/shading and the flexibility of HTML/CSS. Current options may force you to create a layout engine from scratch for example.
Canvas-first sites suck. They can't use any system services, as that would be a privacy issue. They can't use the system dictionary for correction, since to do so they'd need the contents of the dictionary, or at least a way to query user-customized corrections. Similarly, they can't offer system-level accessibility; they end up having to roll their own, in which case every app that uses canvas has a completely different UI.
What if you want an HTML-first page with a canvas in it, but then you realize you want some layout/styling for the text within the canvas? Seems unnecessary to propagate that situation up to the type of the top-level page.
And then what if you realize you need a canvas-in-the-HTML-in-the-canvas? It's endless. Canvas-first makes sense; it's basically how it works everywhere outside of the web. Start with the smallest abstractions and build on them (HTML on canvas) rather than leave escape hatches to your big abstractions because they fail to cover every use case (canvas in HTML).
If you support the DOM and hitscan, then it doesn't matter. You can red pill Ouroboros yourself all day and not care. Every element a canvas, every raindrop an ocean.
Wait, is that never going to happen? I was so excited when WASM was first announced, but then lack of DOM access killed it for me. It was supposed to allow us to use any language instead of just JS.
You can access the DOM from WASM just fine, you just have to go through a JS shim because the DOM is a Javascript API (just like WebGL, WebGPU, WebAudio and any other API available in browsers).
In most DOM access libraries (like https://github.com/web-dom/web-dom) this Javascript shim exists but is completely invisible to the library user (e.g. it looks and feels as if WASM would have direct DOM access).
Why this topic is always brought up I really have no idea; at this point it feels like trolling attempts, because from a technical point of view 'direct DOM access from WASM' simply doesn't make a lot of sense. Accessing web APIs from WASM is an FFI scenario, no matter how you look at it.
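(To make the shim/FFI point concrete, here is roughly what such a shim boils down to - a minimal TypeScript sketch; the import name `set_body_text` is made up, not web-dom's actual API:)

    // Host side: hand the WASM module an import that touches the DOM on its
    // behalf. WASM can only pass numbers, so the string arrives as a
    // (pointer, length) pair into the module's linear memory.
    let memory: WebAssembly.Memory;

    const imports = {
      env: {
        set_body_text(ptr: number, len: number): void {
          const bytes = new Uint8Array(memory.buffer, ptr, len);
          document.body.textContent = new TextDecoder("utf-8").decode(bytes);
        },
      },
    };

    WebAssembly.instantiateStreaming(fetch("app.wasm"), imports).then(({ instance }) => {
      memory = instance.exports.memory as WebAssembly.Memory;
      (instance.exports.main as () => void)(); // module calls env.set_body_text internally
    });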
Actually, DOM implementations are all in C++, and DOM interfaces are described in WebIDL. So direct DOM access from WASM is indeed possible, if browser vendors chose to expose it. Access via a JS shim just utterly destroys performance - orders of magnitude worse than mere FFI - and the real trolling attempts are the ones pretending otherwise.
Browser vendors can't simply "choose" to open direct DOM access to WASM.
When defining standardized DOM APIs in WebIDL, WebIDL assumes that you can use JavaScript strings, JavaScript objects + properties, JavaScript Exceptions, JavaScript Promises, JavaScript garbage collection, and on and on and on. Almost all of the specification of WebIDL itself is about the dozens of types that it assumes the platform already provides. https://webidl.spec.whatwg.org/
WebAssembly doesn’t have any of those things. As a low-level VM, it supports only modules, functions, bytes, numbers (32-bit and 64-bit integers and floats), arrays (called “tables”), and opaque pointers (“reference types”).
No one has ever standardized a DOM API for low-level languages. You’d presumably need to start by defining a new low-level WebIDL design language just to define a low-level DOM API.
Defining WebIDL itself has taken decades.
Today, the browser vendors aren’t convinced that a new low-level DOM API is worth their time. It’s better to make existing JS web apps faster than it is to begin a multi-year (multi-decade?) project to make a new thing possible that could be better in the long run.
I wonder if the working groups are still run by that attitude.
There is the WebAssembly Component Model. Nothing is really preventing browser vendors from exposing a WASM host interface to the DOM as a Component Model interface. This would allow DOM functions to be invoked from WASM without hand-written/generated JS glue code.
Nobody is really calling for exposing the full suite of Web APIs. But basic DOM access allowing manipulation of page elements would be immediately leveraged by all the WASM UI frameworks available today. Framework authors would gladly throw out all the generated JS glue code, which adds painful overhead.
Tbh, the WASM Component Model is first and foremost an overengineered mess which probably will add more overhead than a handwritten JS shim just because it is so complex.
In the end you'll need to marshal datatypes from one language into another, and that is already a mess between 'native' languages (e.g. a C++ std::string is something entirely different than a Rust or Kotlin String).
So in that hypothetical native WASM DOM API, how do you pass something as simple as a string? Let's say the obvious solution is a ptr/length pair - but then, what encoding: UTF-8? UTF-16? UTF-32? No matter what the solution is, you won't find a data representation that directly matches the string representation in all the languages that compile to WASM, so you'll need to do marshalling anyway before calling that hypothetical WASM DOM API.
And suddenly the current 'low-tech' solution of letting a JS shim extract the string data from the WASM heap and build a JS string before calling into a web API suddenly doesn't look so terrible anymore.
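(For reference, the string leg of that 'low-tech' shim really is just a few lines - a sketch, where `alloc` stands in for whatever allocator the WASM module exports:)

    // One direction of the marshalling in question: a JS string (UTF-16 code
    // units internally) re-encoded as UTF-8 bytes and copied into WASM
    // linear memory, yielding the ptr/len pair a hypothetical API would take.
    function passStringToWasm(s: string, memory: WebAssembly.Memory,
                              alloc: (n: number) => number): [number, number] {
      const utf8 = new TextEncoder().encode(s);  // UTF-16 -> UTF-8 conversion
      const ptr = alloc(utf8.length);            // reserve space in the WASM heap
      new Uint8Array(memory.buffer, ptr, utf8.length).set(utf8);
      return [ptr, utf8.length];
    }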
A much more impactful change would be to add more WASM-friendly entry points to web APIs.
For instance there's no reason that WebGPU is so 'Javascript object heavy' or uses strings as enum values except that this is common in other Javascript APIs. If WebGPU had additional "WASM-friendly" functions which use plain numbers (as object handles or enum values) a lot of the marshalling overhead when being called from WASM would simply go away.
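(For contrast - the `createSamplerRaw` variant below is hypothetical, not part of any spec; the first call is real WebGPU, assuming a `device: GPUDevice` has already been acquired:)

    // Today's WebGPU API is descriptor-object and string-enum heavy:
    declare const device: GPUDevice;
    const sampler = device.createSampler({ magFilter: "linear", minFilter: "linear" });

    // A hypothetical "WASM-friendly" addition could take plain numbers and
    // return an integer handle, so no strings or objects would ever cross
    // the WASM/JS boundary:
    //   createSamplerRaw(FILTER_LINEAR, FILTER_LINEAR) -> samplerHandle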
Where does SVG's `foreignObject` fit into this? It seems that SVG supports all of the proposal already? As is evidenced by projects like https://github.com/zumerlab/snapdom that can take "screenshots" of the webpage by copying the DOM with inlined styles into a `foreignObject` tag in an SVG. Then of course that SVG can be rendered to a canvas.
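(Roughly, the trick works like this - a sketch of the general technique, not snapdom's actual code; styles must already be inlined, and external resources won't load inside the SVG:)

    // Wrap markup in an SVG <foreignObject>, rasterize it via an <img>,
    // then draw the image onto the canvas.
    async function drawHtmlToCanvas(html: string, ctx: CanvasRenderingContext2D): Promise<void> {
      const svg =
        `<svg xmlns="http://www.w3.org/2000/svg" width="300" height="150">
           <foreignObject width="100%" height="100%">
             <div xmlns="http://www.w3.org/1999/xhtml">${html}</div>
           </foreignObject>
         </svg>`;
      const url = URL.createObjectURL(new Blob([svg], { type: "image/svg+xml" }));
      try {
        const img = new Image();
        await new Promise((ok, err) => { img.onload = ok; img.onerror = err; img.src = url; });
        ctx.drawImage(img, 0, 0);
      } finally {
        URL.revokeObjectURL(url);
      }
    }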
This proposal is a lot like an easier way to draw foreignObject content into a canvas. It also supports new features, such as updating the canvas when the content changes, and interactivity.
Please correct me if I'm wrong, but I feel rendering HTML on top of canvas solves this with vanilla web tech just fine. Canvas is for rendering things you can't render with HTML, not a replacement for the DOM.
Here's a simple example that's currently very hard to do and requires all kinds of hacky and unsatisfying workarounds:
1. A 3d model, say of a statue in a museum
2. Add annotations to the model drawing attention to specific features (especially if the annotations are not just a single word or number)
If you want the annotations to be properly occluded by the model as you move the camera around, it's hard - you can't use HTML. If you do use HTML, you have to do complex calculations to make it match the correct place in the 3D scene, it will always be a frame delayed, and occlusion is bad - usually you just show or hide the entire HTML annotation based on the bounding box of the 3D model (I have seen better solutions, but they took a ton of work).
So you could use 3D text, maybe SDF, but now you've created an entire text rendering system without accessibility or anything like that. Also, if you want anything more than very simple annotations (for example, videos, lists, select menus, whatever) you either have to reinvent them or fall back to HTML.
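(For anyone wondering what those "complex calculations" look like, a minimal three.js-flavored sketch - it assumes three.js is available, and real occlusion against the model would additionally need a depth test:)

    import * as THREE from "three";

    // Project a 3D anchor point into screen space and move an absolutely
    // positioned HTML label there, once per frame.
    function placeLabel(label: HTMLElement, anchor: THREE.Vector3,
                        camera: THREE.Camera, canvas: HTMLCanvasElement): void {
      const p = anchor.clone().project(camera); // normalized device coords in [-1, 1]
      const x = (p.x + 1) / 2 * canvas.clientWidth;
      const y = (1 - p.y) / 2 * canvas.clientHeight;
      label.style.transform = `translate(${x}px, ${y}px)`;
      // Crude visibility check: hide when the point falls outside the frustum.
      label.style.display = p.z < 1 ? "" : "none";
    }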
That only works if the HTML stuff is on top of everything that's rendered in the canvas; otherwise you need to add another canvas on top of the HTML (and so on for each separate z-layer).
IMHO this step finally starts to fix the "inverted api layer stack" in browsers. All browser rendering should build on top of a universal canvas api.
It should already work if the nested canvas uses the same approach. It's not cyclic, though. To make cyclic canvases work, you need to manually draw the parent canvas to a nested canvas.
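(Concretely, that manual step is one draw call per frame, since a canvas element is a valid CanvasImageSource:)

    declare const parentCanvas: HTMLCanvasElement;
    declare const nestedCanvas: HTMLCanvasElement;
    const ctx = nestedCanvas.getContext("2d")!;

    function frame(): void {
      ctx.drawImage(parentCanvas, 0, 0); // snapshot of the parent this frame
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);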
I support this, as odd as it is. There are times when you need something drawn but could easily reuse an HTML element from elsewhere. Previously you'd have to render that to a bitmap offscreen and then copy it to a full-screen quad or draw it on the canvas. Up until recently, even if you tried to z-index elements with position: absolute, they would be visually overwritten by the canvas (I think this is mostly fixed though).
I don’t know if this is the best solution, but it’s better than previous hacks, if you need to go that route. Basically html2canvas.
There is a real problem using canvas to replace HTML.
Not all, but most HTML. I have not found a good solution for the issue of doing something like MDX in canvas. I have tried SDF, looked at 2D canvas text, Troika, MSDF. You can get text; it's just that laying it out is very difficult. React Three Drei has the ability to put HTML into the three.js ecosystem, but there are issues with CSS and text that make that impractical.
For me the use case is very simple. I would like to take an MDX file and show it in a mesh. Laid out. Maybe I am missing something because I am new to the whole threejs thing, but I really tried.
A good article about text: https://css-tricks.com/techniques-for-rendering-text-with-we...
And an example from the above article: https://codesandbox.io/p/sandbox/css-tricks-msdf-text-fks8w
This shows it can be done; I gave up trying to reproduce it in React Three Fiber.
Why? Personally, I think the use of 3D graphics produces an interface that is an order of magnitude better for users. The real question (and an interesting one to consider) is: why are we still building HTML-first websites?
I read the title and said "shut the fuck up, don't do that." but then I read the rationale and it's fair. It's true there is no layout engine inside canvas, and that is a pain, but I'm not sure it's such a pain as to invite this recursive hell.
One of the more senior engineers I worked with told me: "Every real-life data structure I encountered was tree-like".
I don't understand what the takeaway is here. Is that surprising? Is it not? What does "real-life" mean?
This is an exaggeration, of course.
What does this even mean? Is a hash map "tree-like" somehow? Or is a hash map just a toy data structure with no real-life use cases?
It would be easiest to just ask the browser to render a fragment of HTML onto a canvas, or onto some invisible bitmap, like you can with most other UI toolkits.
They would never do this because of fingerprinting, which is already the reason we can't 'just' do a lot of things, unfortunately.
Edit: And the infamous other half: malware. A bit over a decade ago, malware devs started using canvas to do things like hide script fragments inside bitmap data in seemingly harmless ads; a second script would then extract and assemble them to evade detection.
> TODO: Expand on fingerprinting risks
Sciter supports this - there, images can be created two ways:
1. By painting on the image using the Canvas/Graphics API, where _painter_ is a function used for painting on the image surface with a Canvas/Graphics reference.
2. By making a snapshot of an existing DOM element.
Such images can be used in the DOM, rendered by other Canvas/Graphics calls, and also used in WebGL as textures. See: https://docs.sciter.com/docs/Graphics/Image#constructor
Nothing like that is available on the web.
Also, unless it has the same features and the level of accessibility it has now, it would be a step back.
It would be a gargantuan job.
Meaning: no way, for the security aspect alone.
The web platform can already do this, see SVG foreignObject elsewhere in the thread. The key is to have the proper bounds in place (cross origin resources, etc), and the infrastructure for that is already in place.
This just removes the extra step of relying on SVG to accomplish rendering the HTML, adds a path for getting this content into the accessibility tree, and supporting input on the rendered elements.
Yeah, that's already available in Firefox for chrome/extensions, but not allowed for the web due to fingerprinting and other security risks. For example, rendering an iframe of your bank account…
https://searchfox.org/mozilla-central/rev/f691af5143ebd97034...
It's not a browser engine in a browser engine, just making the already-existing browser engine available in another context. I bet that at least 90% of the DOM implementation code will be shared (since internally the DOM is almost certainly rendered through the same renderer process that also runs WebGL and WebGPU).
The whole point of canvas is to get away from the awful kludge that is HTML and CSS. I'd much rather see a new simple UI library that's developed for canvas.
45kb gzipped is pretty beefy but incredibly small when you consider just what it takes to make this work today. If I understand correctly, it’s basically a DOM and CSS renderer.
Having this type of control can be perfectly valid for certain use cases.
It also feels Flash like.
The javascriptists began a journey 15 years ago to replace Flash. Things have gotten more complicated before becoming simpler, but maybe things will head in that direction soon.
Flash itself was ActionScript (ECMAScript), which has the same syntax as JavaScript.
It sounds like a crazy workaround for Flutter's strange architectural choices.
I would love this. I have to do disgusting hacks to get an embedded browser window in my metaverse (https://substrata.info/) that uses webgl.
The disgusting hack is to render the browser window behind the main webgl canvas, and then punch a hole through the webgl canvas with zero alpha. Event handling (mouse, keyboard) is also a total pain.
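(For the curious, the simplest form of the hole-punch, shown here with a 2D context - the WebGL version is the same idea with a zero-alpha draw; the canvas must be composited non-opaquely for the element behind it to show through:)

    declare const canvas: HTMLCanvasElement;
    const ctx = canvas.getContext("2d")!;  // alpha: true is the default
    ctx.clearRect(100, 50, 640, 480);      // alpha=0 region reveals the browser view behind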
Please excuse me for spamming this thread with examples of how my canvas library approaches these issues:
> Use case: Styled, Laid Out Content in Canvas. There’s a strong need for better styled text support in Canvas. Examples include chart components (legend, axes, etc.), rich content boxes in creative tools, and in-game menus.
Single-line, unstyled text is relatively easy using the Canvas API. Multiline text is a world-of-pain. Styled text is a completely separate world-of-pain. Underlined text? Same! So that gives us a problem space of world-of-pain-cubed. Don't talk to me about RTL text, vertical text, CJK punctuation, Thai text ignoring spaces as a word separator, heavily kerned fonts (staring at you, Arabic and Devanagari), etc.
Demo: https://scrawl-v8.rikweb.org.uk/demo/canvas-207.html
This demo takes the following html markup and displays it in a truncated circle shape. The styling itself happens in CSS - see here: https://github.com/KaliedaRik/Scrawl-canvas/blob/v8/demo/can...
As for the other things I don't want to talk about, see this other demo which attempts to overcome those issues: https://scrawl-v8.rikweb.org.uk/demo/canvas-206.html
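(To make the multiline world-of-pain concrete, here is the naive word-wrap everyone ends up writing with measureText - a sketch, not Scrawl-canvas code - which is already wrong for every script and feature listed above:)

    // Greedy word wrapping: measure each candidate line, break when too wide.
    // Assumes space-separated words and left-to-right text, i.e. it fails
    // for RTL, CJK, Thai, hyphenation, bidi, and styled runs.
    function wrapText(ctx: CanvasRenderingContext2D, text: string, maxWidth: number): string[] {
      const lines: string[] = [];
      let line = "";
      for (const word of text.split(" ")) {
        const candidate = line ? line + " " + word : word;
        if (line && ctx.measureText(candidate).width > maxWidth) {
          lines.push(line);  // current line is full; start a new one
          line = word;
        } else {
          line = candidate;
        }
      }
      if (line) lines.push(line);
      return lines;
    }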
> Use case: Accessibility Improvements. There is currently no guarantee that the canvas fallback content currently used for <canvas> accessibility always matches the rendered content, and such fallback content can be hard to generate. With this API, elements drawn into the canvas bitmap will match their corresponding canvas fallback.
I welcome and applaud this focus on making canvas text accessible. However it's not enough (in my highly opinionated opinion) to just reflect the text back into the DOM. People using screen readers probably don't need every number on the Y axis read out to them every time they navigate onto a chart; they probably just need to hear values as they keyboard-navigate the chart.
Demo: https://scrawl-v8.rikweb.org.uk/demo/modules-001.html
The canvas element is highly inaccessible - I've tried to detail all the issues that have to be addressed here (again, a highly opinionated take): https://scrawl-v8.rikweb.org.uk/docs/reference/sc-accessibil...
> Use case: Composing HTML Elements with Shaders. A limited set of CSS shaders, such as filter effects, are already available, but there is a desire to use general WebGL shaders with HTML.
We already have a comprehensive set of filter effects available through SVG <filter> elements. They are, however, difficult to compose and have a tendency to be computationally heavy. WebGL shaders can be fantastic, but face the (current) limit on how many WebGL canvas elements you can include on a page; they're also difficult to compose.
For my library's filter engine, I took inspiration from the SVG approach. Details can be found here: https://scrawl-v8.rikweb.org.uk/docs/reference/sc-filter-eng...
> Use case: HTML Rendering in a 3D Context. 3D aspects of sites and games need to render rich 2D content into surfaces within a 3D scene.
HTML canvas elements are just DOM elements, and can be 3D-rotated like other elements. Interacting with 3D-rotated canvas elements is an interesting problem space.
Classic rotating cube demo: https://scrawl-v8.rikweb.org.uk/demo/dom-008.html
Tracking the mouse over a 3D-rotated canvas element demo: https://scrawl-v8.rikweb.org.uk/demo/dom-013.html
This would make the entire visible page into a canvas-like drawing surface which also renders DOM elements as per usual. At some level there's a process which rasterizes the DOM - opening drawing APIs into that might be a better solution.
It's sort of the same thing as HTML-in-canvas conceptually, but architecturally it makes DOM rendering and canvas rendering overlapping equals, with awareness going both ways. E.g., a line drawn on the page will cause the DOM elements to reflow unless told to ignore it.
[0] https://github.com/erichocean/blossom
Isn't this already trivial? You just need to be aware that the framebuffer size of a canvas element is different from its DOM element size, but you can easily glue the canvas framebuffer size to the element size by listening for resize events (this is for WebGL and WebGPU canvases; I don't know about 2D canvas).
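(Something like this - a sketch using ResizeObserver; the devicePixelRatio handling is the part people usually forget:)

    const canvas = document.querySelector("canvas")!;
    new ResizeObserver((entries) => {
      const rect = entries[0].contentRect;
      const dpr = window.devicePixelRatio;
      canvas.width = Math.round(rect.width * dpr);   // framebuffer pixels
      canvas.height = Math.round(rect.height * dpr); // (CSS size stays put)
    }).observe(canvas);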
Accessibility is a key reason for this proposal. Today, canvas accessibility is quite limited. This proposal enables the browser to know how accessible DOM elements map to canvas pixels.
I've never understood why they couldn't have just used the existing 3D CSS for WebXR. All the data is there, all they need to do is render the DOM from 2 POVs, one for each eye. They could even have had some standard to let it auto composite with WebGL.
https://github.com/WICG/html-in-canvas
Ah yes. Because HTML is renowned for its performance and quality.
Instead of pushing this idiocy they should add the things that canvas lacks.