
Tracking windows Spout support #38

Open
jo-chemla opened this issue Aug 21, 2024 · 26 comments
@jo-chemla
Contributor

jo-chemla commented Aug 21, 2024

Hi there, first of all thanks for this great addon! It would be great to support a Windows Spout receiver, since the NDI receiver has a pretty low framerate. Also, Spout sender discovery would be nice, as listed in your readme.

This issue is opened to track progress on this Spout implementation. Thanks!

Also adding a Blender devtalk reference to your thread regarding GPU texture sharing/writing

@maybites
Owner

The Spout library doesn't support this feature (yet). Until it is available, there is no way I can implement it.

Plus: the performance would be as bad as with NDI, because I have to go via NumPy arrays and the CPU, and that's just slow.

@jo-chemla
Contributor Author

jo-chemla commented Aug 23, 2024

Thanks for your feedback. This thread was opened as a way for Windows users to watch progress on these Spout implementations, because your readme states that, at the moment, the Windows Spout receiver is not yet implemented, nor is sender discovery.

Regarding Spout receiving, the UI could probably be updated to remove Spout from the Texture Receiving panel while the implementation is incomplete, since there is no way yet to tell Blender the name of the Spout source. When exploring the plugin UI (rather than the GitHub readme) I initially assumed that both Spout sending and receiving would work; parsing through the source code, it seemed that the plugin implements both SpoutClient and SpoutServer based on the Python-SpoutGL library.

Two sidenote feedbacks:

  • the Spout sender seems to have a better framerate than the NDI sender, comparing NDI Studio Monitor with SpoutReceiver. I thought a different implementation of the texture copy might explain that, but could not identify the root cause; I also just saw how you do the numpy texture copy you mention for syphon here
  • there seems to be an indexing bug when multiple NDI servers are running on the computer and one is selected within the Receive Texture panel. When I have NDI Test Patterns, Camera and NDI Monitor all running, I have to loop through the Blender list multiple times (and sometimes select Off from the list) before my NDI texture in the UV editor finally updates live (at a few fps)

Also for people tracking Spout Sender discovery implementation into upstream Python-SpoutGL lib, see here

@jo-chemla
Contributor Author

Hi again,
Regarding listing available Spout senders, the Python-SpoutGL upstream maintainer just added the API method getSenderList() and a couple of other APIs related to listing senders in the 0.1.0 release, as per this discussion

Add getSenderList(), getSenderInfo(), getActiveSender(), setActiveSender()

Hope this is useful!
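As a rough sketch of what consuming that new discovery API could look like (untested; the only things assumed are the 0.1.0 method names listed above, plus width/height attributes on the info object):

```python
# Sketch of sender discovery with Python-SpoutGL >= 0.1.0. Only the
# method names from the release note above are assumed (getSenderList,
# getSenderInfo); the width/height attributes are an assumption too.

def list_senders(receiver):
    """Return [(name, width, height)] for every sender the receiver can see."""
    senders = []
    for name in receiver.getSenderList():
        info = receiver.getSenderInfo(name)
        senders.append((name, info.width, info.height))
    return senders

# On Windows with SpoutGL installed this would be driven by the real receiver:
#   import SpoutGL
#   print(list_senders(SpoutGL.SpoutReceiver()))
```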

@jlai

jlai commented Aug 26, 2024

Python-SpoutGL started off as a Blender texture sharing experiment. I was trying to see if I could integrate Krita with Blender so you could live-preview textures being edited in Krita, although the performance was never great because of the CPU image copying mentioned above.

spout-krita-sync.mp4

Code (simplified, doesn't include polling / frame sync):

# Imports assumed by this snippet
from io import BytesIO
from itertools import repeat
import array

import bgl
import SpoutGL

# Allocate buffers (do this whenever the source size changes)
io_bytes = BytesIO(bytes(repeat(0, width * height * 4)))
byte_buffer = io_bytes.getbuffer()
float_buffer = array.array('f', repeat(0, width * height * 4))

# Read image as bytes
receiver.receiveImage(byte_buffer, bgl.GL_BGRA, False, 0)

# Convert from bytes (0-255) to 32-bit float (0.0-1.0)
SpoutGL.copyToFloat32(byte_buffer, float_buffer)

# Copy float buffer to image
# Per https://blenderartists.org/t/faster-image-pixel-data-access/1161411/6 this is
# faster than assigning to image.pixels when using a buffer
image.pixels.foreach_set(float_buffer)

# Make sure image gets marked dirty for rendering
image.update()
image.update_tag()
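As an aside for readers: copyToFloat32 simply normalizes each 0-255 byte into a 0.0-1.0 float. A pure-Python equivalent (shown only to illustrate the semantics; the SpoutGL helper does the same thing natively and much faster) would be:

```python
# Pure-Python illustration of the byte -> float conversion used above:
# each 0-255 byte maps linearly onto 0.0-1.0 in a preallocated
# array('f'), which is what the fast native helper produces.
import array
from itertools import repeat

def copy_to_float32(byte_buffer, float_buffer):
    for i, b in enumerate(byte_buffer):
        float_buffer[i] = b / 255.0

pixels = bytes([0, 128, 255, 255])           # one BGRA pixel
floats = array.array("f", repeat(0.0, 4))    # preallocated, as above
copy_to_float32(pixels, floats)              # floats[0] == 0.0, floats[3] == 1.0
```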

@jo-chemla
Contributor Author

Thanks for the code sample and the demo, both look good. Also thanks for the link stating that image.pixels.foreach_set(float_buffer) is faster than setting image.pixels =, which the maybites plugin is using at the moment (for both NDI and Syphon/Spout).

Regarding GPU copy of textures in Blender - or passing the texture GPU pointer - here is a dedicated thread opened by the maintainer to track progress, in case it's useful.

@maybites
Owner

Thanks to you all - very interesting!

I don't have access to a Windows machine at the moment - it will take until mid-September before I can look into it again.

I am also open to pull requests, though I won't accept them until I can test them, too.

@maybites maybites reopened this Aug 26, 2024
@maybites
Owner

message to my future me: checkout https://blendermarket.com/products/audvis

@jo-chemla
Contributor Author

jo-chemla commented Aug 26, 2024

I've tried my luck implementing SpoutDirectory and SpoutClient but am not 100% done; the code is copy-pasted below in case it's useful.

State of implementation:

  • Make sure to update SpoutGL to 0.1.0 in __init__.py and uninstall/reinstall the Python module
  • the directory correctly lists the available Spout senders, with one minor quirk: the NDI directory overwrites the Spout list in the plugin UI on my Windows machine, so I've disabled it in blender-texture-sharing\fbs\FrameBufferDirectory.py, returning return SpoutDirectory(name) instead of return NDIDirectory(name)
  • the texture receive is not 100% finished; it seems that the raw buffer copied via target_image.pixels.foreach_set(float_buffer) is fully transparent - maybe my use of receiveImage is incorrect. I'm fighting with the fact that the data returned by the Python-SpoutGL library is buffers rather than simpler NumPy arrays, but the code shared above by jlai is really useful!
  • At the moment, self.receiver.receiveImage(byte_buffer, bgl.GL_BGRA, False, 0) seems to return an empty buffer, since SpoutGL.helpers.isBufferEmpty(byte_buffer) returns True, although I'm using the demo sender from Spout.
  • The dimensions are correctly parsed and updated in apply_frame_to_image, although it could listen for the Spout sender being updated when detoggling/retoggling the Spout texture in the Blender receive UI.
SpoutDirectory.py
from typing import Optional, List

import SpoutGL

from ..FrameBufferDirectory import FrameBufferDirectory


class SpoutDirectory(FrameBufferDirectory):
    def __init__(self, name: str = "SpoutDirectory"):
        super().__init__(name)
        self.sources: Optional[List[str]] = []  # sources or servers

    def setup(self):
        self.receiver = SpoutGL.SpoutReceiver()
        self.update()

    def update(self):
        self._reset()
        # Add getSenderList(), getSenderInfo(), getActiveSender(), setActiveSender()
        # available from 0.1.0 of SpoutGL

        sender_names = self.receiver.getSenderList()
        sender_list = [self.receiver.getSenderInfo(n) for n in sender_names]
        # to access sender_info.width, sender_info.height

        print(sender_names, sender_list)
        self.sources = sender_names
        print("spout sources", self.sources)

        for i, s in enumerate(self.sources):
            self.directory.add((s, s, s, "WORLD_DATA", i))

        self.register()

    def has_servers(self):
        return bool(self.sources)

    def get_servers(self):
        return self.sources
SpoutClient.py
from typing import Optional, Any
from io import BytesIO
from itertools import repeat
import array
import logging

import bpy

import SpoutGL
from SpoutGL.enums import GL_RGBA

from ..FrameBufferSharingClient import FrameBufferSharingClient


def make_empty_buffer(width, height, format):
    return BytesIO(
        bytes(repeat(0, width * height * SpoutGL.helpers.getBytesPerPixel(format)))
    )


class SpoutClient(FrameBufferSharingClient):
    def __init__(self, name: str = "SpoutClient"):
        super().__init__(name)

        self.ctx: Optional[SpoutGL.SpoutClient] = None
        self.texture: Optional[Any] = None

    def setup(self, sources):
        self.receiver = SpoutGL.SpoutReceiver()

    def has_new_frame(self):
        # TODO: use self.receiver.isUpdated() instead of always returning True
        return True

    def new_frame_image(self):
        # TODO: use self.receiver.isUpdated() instead of always returning True
        return True

    def apply_frame_to_image(self, target_image: bpy.types.Image):
        # TODO get name dynamically
        SENDER_NAME = "Spout Demo Sender"

        with SpoutGL.SpoutReceiver() as receiver:
            self.receiver = receiver
            receiver.setReceiverName(SENDER_NAME)
            # width = receiver.getSenderWidth()
            # height = receiver.getSenderHeight()

            sender_info = receiver.getSenderInfo(SENDER_NAME)
            width, height = sender_info.width, sender_info.height

            self.io_bytes = BytesIO(bytes(repeat(0, width * height * 4)))
            byte_buffer = self.io_bytes.getbuffer()
            float_buffer = array.array("f", repeat(0, width * height * 4))
            # byte_buffer = array.array("B", repeat(0, width * height * 4))

            # Read image as bytes
            # receiveImage getFrame receiveTexture readMemoryBuffer
            gl_rgba = GL_RGBA  # bgl.GL_BGRA or GL_RGBA or GL.GL_RGBA
            result = receiver.receiveImage(byte_buffer, gl_rgba, False, 0)

            done = False
            buffer = None
            while not done:
                if receiver.receiveImage(
                    buffer.getbuffer() if buffer else None, gl_rgba, False, 0
                ):
                    if receiver.isUpdated():
                        incoming_width = receiver.getSenderWidth()
                        incoming_height = receiver.getSenderHeight()
                        buffer = make_empty_buffer(
                            incoming_width, incoming_height, gl_rgba
                        )
                        continue
                    received_pixels = buffer.getvalue()
                    # Not sure why first frame is empty
                    if SpoutGL.helpers.isBufferEmpty(buffer.getbuffer()):
                        continue
                    done = True
            print(
                "out of while loop, yay",
                received_pixels[:64],
                "SpoutGL.helpers.isBufferEmpty(buffer.getbuffer())",
                SpoutGL.helpers.isBufferEmpty(buffer.getbuffer()),
            )

            # Debug: check whether any pixel data actually arrived
            print("byte_buffer", byte_buffer[:30], "result", result)
            print("isBufferEmpty", SpoutGL.helpers.isBufferEmpty(byte_buffer))
            print("trying to get bytes, look for Got Bytes message just below")
            if (
                byte_buffer
                and result
                and not SpoutGL.helpers.isBufferEmpty(byte_buffer)
            ):
                print("Got bytes", bytes(byte_buffer[0:64]), "...")

            print(
                "width X height",
                width,
                height,
                width * height,
                "floatBufferInfo",
                float_buffer.typecode,
                "== f, ",
                float_buffer.itemsize,  # itemsize
                "== 4, ",
                float_buffer.buffer_info()[1],
                byte_buffer.nbytes,
            )

            # Convert from bytes (0-255) to 32-bit float (0.0-1.0)
            SpoutGL.helpers.copyToFloat32(buffer.getbuffer(), float_buffer)

            # Copy float buffer to image
            # Per https://blenderartists.org/t/faster-image-pixel-data-access/1161411/6 faster than image.pixels =
            # when using a buffer
            if (
                target_image.generated_height != height
                or target_image.generated_width != width
            ):
                target_image.scale(width, height)

            target_image.pixels.foreach_set(float_buffer)
            # Make sure image gets marked dirty for rendering
            # target_image.update()
            # target_image.update_tag()
            #

            if True:  # shared_memory is None:
                memory_length = receiver.getMemoryBufferSize(SENDER_NAME)
                shared_memory = array.array("B", repeat(0, memory_length))

                result = receiver.readMemoryBuffer(
                    SENDER_NAME, shared_memory, len(shared_memory)
                )
                message = bytes(shared_memory).decode().split("\x00")[0]
                print("decoded", message)

    # Wait until the next frame is ready
    # Wait time is in milliseconds; note that 0 will return immediately
    # receiver.waitFrameSync(SENDER_NAME, 10000)
    # print("sync received")

    def can_memory_buffer(self):
        return False

    def create_memory_buffer(self, texture_name: str, size: int):
        success = self.ctx.createMemoryBuffer(texture_name, size)

        if not success:
            logging.warning("Could not create memory buffer.")

        return

    def read_memory_buffer(self, texture_name: str, buffer):
        success = self.ctx.readMemoryBuffer(texture_name, buffer, len(buffer))

        if not success:
            logging.warning("Could not read memory buffer.")

        return

    def release(self):
        self.receiver.releaseReceiver()

@jo-chemla
Contributor Author

Edit: 🚀 It now works! I just edited the above SpoutClient.py snippet to reflect the changes.

What pointed me to the working solution was reusing other bits from the original demo code repo, especially this comment: "Not sure why first frame is empty". So a while loop is needed to get the first frames.

The code is really messy for now, and it is not live-updating yet - only the first frame gets captured, then I have to detoggle/retoggle the received texture to see it updated. This might have to do with has_new_frame or new_frame_image, although I set both to True.

image

@jlai

jlai commented Aug 26, 2024

I haven't tried the NDI/Syphon receiver so I'm not sure how this addon normally works, but it looks like the operator.py code for receivers only hooks into bpy.app.handlers.depsgraph_update_pre, which only gets called when something in the scene changes. For example, you can add a print statement to write_frame_handler: if you add a cube to the scene and drag it around, you'll see it print when you move the cube, but not when the scene is unchanged.

Secondly, there are the two update calls in my example. image.update() loads the changes to the float data array into Blender's OpenGL/Metal/Vulkan/etc. texture used for rendering.

image.update_tag() marks the image as dirty in the depsgraph so that any 2D/3D views that use the image will be rerendered. This can be called from anywhere to trigger a re-render. E.g. you could set a timer operator that calls image.update_tag() once a second, which would re-render any currently visible scenes using the image (and probably do nothing if the image was not used).

I'm not sure how the Syphon/NDI implementations flush the images, since I think those two calls would normally be necessary, unless something else in Blender happens to trigger the image to update.

In my experiment, I had a thread with the Spout client which would listen for new frames and push (instead of pull) updates to the image using update_tag() (note that any updates to Blender data structures have to be done on the main thread, e.g. by scheduling a timer with bpy.app.timers.register). But note that the "push" approach was suited to my use case of low-frame-rate previews, where it's OK if frames get dropped (and I had sender-side optimizations to not send frames if nothing changed).

I'm sort of curious what you're planning on using texture streaming for, especially if real-time is probably not feasible.

@maybites
Owner

image.update_tag() marks the image as dirty in the depsgraph so that any 2D/3D views that use the image will be rerendered. This can be called from anywhere to trigger a re-render. E.g. you could set a timer operator that calls image.update_tag() once a second, which would re-render any currently visible scenes using the image (and probably do nothing if the image was not used).

I'm not sure how the Syphon/NDI implementations flush the images, since I think those two calls would normally be necessary, unless something else in Blender happens to trigger the image to update.

It's nice how this thread develops.

I didn't know about image.update_tag(). So far I couldn't figure out how to force Blender to update the scene after a new frame (both Syphon and NDI) has arrived. Thanks!

@jo-chemla
Contributor Author

Thanks both for the feedback! Indeed, when I move something in the scene, the Spout receiver texture updates in Blender - at low fps, probably because there is a lot of non-useful code, prints, and bits that could be moved out of apply_frame_to_image rather than being executed on every frame.

Our use case is simplifying the workflow for our artists to design immersive exhibitions, fit our renders (3D scans, pointclouds or meshes, 3D Tiles etc.) to the exhibit space, and iterate faster. The low-fps NDI on Windows is already a pretty good step in the right direction, and I wanted to see if Spout would make things better fps-wise - although it's true that the CPU image copy seems to be blocking, hence why a Blender API to retrieve the GPU pointer to the image/texture might alleviate that.

Image_066.mp4

@maybites
Owner

hence why a Blender API to retrieve the GPU pointer to the image/texture might alleviate that.

Totally agree.

I was already in contact with Blender developers (see: https://devtalk.blender.org/t/adding-a-write-method-to-gpu-types-gputexture/33226/4) to gauge the waters. It's technically feasible, but I don't have the time to dive into this black box of a machine room.

@jo-chemla
Contributor Author

jo-chemla commented Aug 27, 2024

Hi there, I've just removed most of the unused code, and the framerate is way smoother - see attached.

Here is the final SpoutClient.py (folded). Do you want me to make a PR?
Note I still have to move an object for the update to take place, and I still had to disable return NDIDirectory(name) within blender-texture-sharing\fbs\FrameBufferDirectory.py create() so the list of available Spout senders does not get overwritten by the NDI list.

Best,

SpoutClient.py
from typing import Optional

import SpoutGL
from io import BytesIO
import array
from itertools import repeat

import bpy
import logging

from ..FrameBufferSharingClient import FrameBufferSharingClient


def make_empty_buffer(width, height, format):
    return BytesIO(
        bytes(repeat(0, width * height * SpoutGL.helpers.getBytesPerPixel(format)))
    )


GL_RGBA = SpoutGL.enums.GL_RGBA


class SpoutClient(FrameBufferSharingClient):
    def __init__(self, name: str = "SpoutClient"):
        super().__init__(name)
        self.sender_name = name
        self.receiver: Optional[SpoutGL.SpoutReceiver] = None

    def setup(self, sources):
        print("SpoutClient setup sources", sources)
        self.receiver = SpoutGL.SpoutReceiver()
        self.byte_buffer = None
        self.float_buffer = None

    def has_new_frame(self):
        # self.receiver.isUpdated()
        return True

    def new_frame_image(self):
        # self.receiver.isUpdated()
        return True

    def apply_frame_to_image(self, target_image: bpy.types.Image):
        # with SpoutGL.SpoutReceiver() as receiver:
        # self.receiver = receiver
        receiver = self.receiver
        receiver.setReceiverName(self.sender_name)

        sender_info = receiver.getSenderInfo(self.sender_name)
        width, height = sender_info.width, sender_info.height
        if (
            target_image.generated_height != height
            or target_image.generated_width != width
        ):
            # Rescale blender image
            target_image.scale(width, height)
            # Update buffers dims
            self.float_buffer = array.array("f", repeat(0, width * height * 4))
            self.byte_buffer = make_empty_buffer(width, height, GL_RGBA)

        done = False
        while not done:
            result = receiver.receiveImage(
                self.byte_buffer.getbuffer() if self.byte_buffer else None,
                GL_RGBA,
                False,
                0,
            )
            if result:
                if receiver.isUpdated():
                    continue
                # Not sure why first frame is empty
                if SpoutGL.helpers.isBufferEmpty(self.byte_buffer.getbuffer()):
                    continue
                done = True

        # Copy float buffer to image
        SpoutGL.helpers.copyToFloat32(self.byte_buffer.getbuffer(), self.float_buffer)
        # Per https://blenderartists.org/t/faster-image-pixel-data-access/1161411/6 this is
        # faster than assigning to image.pixels when using a buffer
        target_image.pixels.foreach_set(self.float_buffer)

        # Make sure image gets marked dirty for rendering
        target_image.update()
        target_image.update_tag()

    # Wait until the next frame is ready
    # Wait time is in milliseconds; note that 0 will return immediately
    # receiver.waitFrameSync(self.sender_name, 10000)
    # print("sync received")

    def can_memory_buffer(self):
        return False

    def create_memory_buffer(self, texture_name: str, size: int):
        success = self.ctx.createMemoryBuffer(texture_name, size)

        if not success:
            logging.warning("Could not create memory buffer.")

        return

    def read_memory_buffer(self, texture_name: str, buffer):
        success = self.ctx.readMemoryBuffer(texture_name, buffer, len(buffer))

        if not success:
            logging.warning("Could not read memory buffer.")

        return

    def release(self):
        self.receiver.releaseReceiver()
Image_068.mp4

@maybites
Owner

Have you tried the image.update_tag() option?

@jo-chemla
Contributor Author

Here is the PR, which now takes the sender's name into account dynamically: #39
I've tried both image.update_tag() & image.update() but it seems that apply_frame_to_image is not called multiple times unless I move the object in the scene as jlai mentioned, probably because

it looks like the operator.py code for receivers only hooks into bpy.app.handlers.depsgraph_update_pre which only gets called when something in the scene changes

@maybites
Owner

Your PR looked good and I merged it with my master.

@jo-chemla
Contributor Author

Thanks, great to hear! Glad I could help work out the spout client and listing for blender-texture-sharing!

@maybites
Owner

I just pushed a fix for the overwriting of the NDI list in the spout receiver selection menu. At least I hope. I could only test it on MacOS.

@jo-chemla
Contributor Author

The fix seems to work correctly to list either Spout or NDI on Windows, thanks!
Same thing for both Spout and NDI: I have to move the cube to have the test pattern appear and update inside the texture. Probably image.update_tag() & image.update() can be added in some generic place to avoid the need for these scene updates.

@jlai

jlai commented Aug 28, 2024

Simplest option for automatic refresh would be to add a timer targeting a user-configurable FPS and call write_frame_handler and image.update_tag() from the callback. The timer callback could return (1/fps) in the simplest implementation, or ideally also take into account the time passed since the last update time so that the interval is consistent.

Spout has an isFrameNew() that you can try using for the has_new_frame implementation to avoid drawing when there isn't a new frame ready, although I don't know how well it works (I remember it acting a bit wonky, but I haven't touched it in a few years).

At higher frame rates, it may be better to use a frame listener (push) for registering a callback with the syphon/spout client to draw when a new frame is received (otherwise if the timing doesn't line up, you might miss frames) but given the performance limitations (especially with larger textures) I don't think that'd be an issue.
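To make the first suggestion concrete, here is a sketch of the interval math; the Blender wiring in the trailing comment is an untested assumption (write_frame_handler comes from this thread's discussion of operator.py, and fps stands in for the user-configurable setting):

```python
# Drift-compensated timer interval: the callback returns how long to wait
# until the next tick, subtracting however long the frame update took, so
# ticks stay roughly 1/fps apart. This part is plain Python.

def next_interval(fps, started, now):
    """Seconds to wait so that ticks average 1/fps apart; never negative."""
    return max(0.0, (1.0 / fps) - (now - started))

# Rough (untested) Blender wiring, assuming a user-set fps value and the
# addon's write_frame_handler:
#
#   import time, bpy
#   def refresh():
#       started = time.monotonic()
#       write_frame_handler()   # pull a frame from the receiver
#       image.update_tag()      # mark views using the image for redraw
#       return next_interval(fps, started, time.monotonic())
#   bpy.app.timers.register(refresh)
```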

@jo-chemla
Contributor Author

Understood, thanks for the pointers. This issue can probably be closed now, since both the Spout client and directory work great!

@maybites
Owner

I just pushed a fix for the update issue. It now uses a timer instead and also has a refresh-rate setting.

I'll keep the issue open for the time being.. I am not done yet :-)

@jo-chemla
Contributor Author

Thanks, let's keep this open then, sure!
Nice fix with the timer; the texture now updates correctly without requiring user interaction/scene updates.

Do you think it would make sense to ping back blender dev team on your original forum post, in order to know whether sharing the texture GPU pointer can be implemented for better framerate?

@maybites
Owner

Do you think it would make sense to ping back blender dev team on your original forum post, in order to know whether sharing the texture GPU pointer can be implemented for better framerate?

They already confirmed the technical feasibility. But I very much doubt they will do it. It's not high enough on their priority list. This is very much a fringe usage.

But you can try.

@jlai

jlai commented Aug 28, 2024

I think for textures the problem is that they can be used in a bunch of different contexts and environments including the UV editor, Cycles (which can be CPU rendered), Metal/Vulkan, external rendering engines, and other addons. So OpenGL is an implementation detail that may not even be available in some cases.

And it sounds like texture/image handling is even more complex in Vulkan so it would probably be harder to abstract over compared to something like GPUOffscreen which serves a more narrow purpose.
