Reduced the restrictions on the Blender exporter; designing terrains is now much more practical.
I was finding Blender a little on the slow side to work with at larger resolutions, so I started experimenting with my own in-Urho terrain editor that mimics Blender’s modifier stack, with optimisations aimed at working with terrains. At first I implemented it in AngelScript; now I’ve moved the main parts to C++.
So far I have a basic fractal terrain generator, a ‘hook’ displace (à la Blender’s hook-to-vertex), a road excavator, and smoothing, and I’ve also toyed with using physics to erode terrain.
Prototype of Urho’s Navigation mesh and Crowd manager system in IOGRAM:
Dang, I feel less original now…
Have you been working on something? Feel free to share.
I should share my terrain system at some point, I think it’s quite useful for procedurally generating interesting terrains.
I’ve been working in 2d land recently. To that end, I wanted to dabble in generating colour palettes so that any art I attempt to produce has some hope of not looking terrible. That turned into quite an expedition:
I’m working on a survival FPS with old-school shooter elements. Very early in development, though. I say I’m not original because I’m going for an art style similar to your other thing.
Super awesome to see this in action!
@rku, I’ve lightened my attitude a bit and the dump @ https://github.com/JSandusky/Blocks is MIT’d, lift whatever you want from it if there’s anything useful to you there … though it’s like 50% robot generated so yeah … whatever you can get from it great.
I’ll update the dump in a couple of days, after I’ve trimmed out the stuff I care about and that won’t work in anyone’s repos but my own.
I’ve abandoned the approach I took there and now regard docking as a bullshit excuse to let poor UI slide under the guise of customization. Current reincarnation is like Akeytsu mashed with zBrush … though I still stick with WPF for the stuff I sell.
EDIT: And another video…
NoesisGUI implementation (drawn overtop of ImGui here).
Has not been fun, quite painful actually, with interfaces for this and interfaces for that and a whole lot of “nope, you really have no idea WTF anything is … so query and pray it’s there”.
Still a ton to do with it before it’s plausibly usable - even then, I don’t see how it’s mappable to any generic use with MVVM/extension registration and the like plus the extra steps in making Blend and the runtime coherent.
Hi Sinoid, do you have your project on GitHub? I tried to integrate NoesisGUI into Urho too, but wasn’t successful. How do you render the GUI? Do you use NoesisRenderDevice, or did you create a custom one?
I’ve only implemented for DX11, so I don’t know what I’ll face should I try for OpenGL (which I never ever will).
The biggest thing was GPU state. Noesis’ prewritten renderers do not save or restore state. To handle that I lifted the state save/restore from the Dear ImGui implementations: I call a SaveGPUState function before Noesis does anything, and a RestoreGPUState function after Noesis does offscreen rendering and again after primary rendering.
That’s in a loop because right now I support multiple views, basically it’s:
Save GPU State
foreach view
    view -> UpdateSize
    if view needs offscreen render?
        view -> Render Offscreen
        Restore GPU State
    view -> Render
    Restore GPU State
end foreach
Noesis Renderer: I used the example D3D11RenderDevice from the NoesisApp sources. I just changed it to not have any ID3D11Device/ID3D11DeviceContext ownership - it receives them from Urho3D, and all functions for creating/disposing those are removed.
Noesis Interfaces: pretty crude, I construct them with an Urho3D context so they can access the ResourceCache. Textures always go through the wrapper API as in the Noesis D3D11 example (wrap the native handle).
Urho3D::Graphics: I had to modify Graphics::ResetRenderTargets to force a rebind to defaults. The current implementation is soft, so I forced impl_->renderTargetsDirty_ = true; inside the function to make it hard.
Input Handling: still a work in progress, just mouse at the moment - more concerned with data/view management right now.
View management: really crude. XAMLGui is a subsystem (like ResourceCache) that is used to construct XAMLView instances. Those views have a few helpers for storyboards, setting the root DataContext, etc. Otherwise it’s all still early and raw.
I’m doubtful that it’s really possible to do outside of an extremely raw setting, but it’s GUI - there is no such thing as generic GUI. AngelScript bindings do look to be hell, though.
Building: Noesis has a completely psychotic SDK folder layout. I use Premake, so my build is not relevant to anyone using CMake. The stuff I use from the NoesisApp/example sources is copied straight over rather than referenced, because the paths are stupid nuts (
Progress. C# fields for subclasses of Serializable are automatically exposed to the editor and included in serialization.
Cool rain and dynamic light effects. No special shaders, just diffuse textures, and the character uses a DiffSpecular material with a slight specular power.
EDIT: Except the colour correction / desaturation fullscreen shader I’m using.
@Dave82 nice as usual
Starting to put that CivetWeb dependency to work for something other than websockets, embedded info server:
Not planning to do anything crazy with it; really it’s just about being able to serve image/text blobs for diagnostics on demand, with some basic querying, and attribute editing at the absolute maximum.
I’m absolutely horrid at web development anyway, and CivetWeb runs connections in threads, which locks a lot off (AngelScript debugging only works because it freezes things), so I can’t go bonkers.
rku Nice, your editor looks very good. Can it be launched as an in-game editor? Do you have a base version that works with the original Urho3D? I’d like to integrate it independently of your engine fork.