25-30 FPS is in no way acceptable to me, which is what killed the game (newer versions anyway) for me when 1.8 came out. Even in 1.7 (as well as all newer versions, until I got a new computer last year) I had some weird issue where the game hitched every 10th frame - always every 10th frame, regardless of any settings (even the FPS limit) and even when running external programs to cause lag - so the effective FPS was that much lower (here is an old screenshot with Optifine's lagometer enabled; it may say 58 FPS but it looked like 1/10th of that).
Also, most games are in no way comparable to Minecraft - does Overwatch do anything underneath all those fancy textures? Does it have to render millions of blocks (or run culling code so as to not crash your computer by trying to render every single face)? No? For some idea of just what Minecraft has to do to render just a 10 chunk render distance (the minimum required due to a bug with mob (de)spawning), there are 441 (21x21) chunks loaded, each with as many as 65536 blocks (16128 for a world height of 63, e.g. open ocean) and thus 393216 (96768) faces. If the game tried to render every face of every block, the GPU would have to render as many as 173 (42.7) million faces - even with 16x16 textures that's 44.4 billion texels (a texel being one pixel of a texture, so each face has 256 of them) - and a GeForce GTX 1080 (a very high-end GPU) can only process 257 billion texels per second, which works out to a mere 5.8 FPS.
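To make those numbers easier to check, here is a quick back-of-envelope sketch (the fill-rate figure is the published GTX 1080 spec quoted above; everything else just follows from the chunk dimensions):

```java
// Back-of-envelope math for rendering every face at 10 chunk render distance.
// The constants come from the figures in the paragraph above, not from measurement.
public class NaiveRenderCost {
    public static void main(String[] args) {
        int renderDistance = 10;
        int chunksLoaded = (2 * renderDistance + 1) * (2 * renderDistance + 1); // 21x21 = 441
        long blocksPerChunk = 16L * 16 * 256;     // 65,536 for a full-height chunk
        long facesPerChunk  = blocksPerChunk * 6; // 393,216
        long totalFaces     = facesPerChunk * chunksLoaded;   // ~173 million
        long texelsPerFace  = 16L * 16;           // 256 texels for a 16x16 texture
        long totalTexels    = totalFaces * texelsPerFace;     // ~44.4 billion
        double gpuTexelsPerSecond = 257e9;        // GTX 1080 texture fill rate (approx.)
        System.out.printf("Chunks: %d, faces: %d, texels: %d%n",
                chunksLoaded, totalFaces, totalTexels);
        System.out.printf("Upper-bound FPS if every texel were sampled once: %.1f%n",
                gpuTexelsPerSecond / totalTexels);
    }
}
```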
This can be reduced to about 1292 (704) faces per chunk by removing every face that is occluded by another block, leaving only 256 faces each on the top and bottom of each chunk plus the sides of the outermost chunks, for a reduction of up to 300-fold - but that requires checking every single block to see whether each of its faces is occluded. Even a chunk with terrain, trees, caves, structures, etc. is going to have most of its block faces occluded, and the game performs an additional culling step to skip 16x16x16 chunk sections which are not visible at all, which requires even more checks - and all of that extra checking is dumped onto the CPU, which partly explains why the game is so reliant on a good CPU.
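For illustration, here is a rough sketch of what that per-face check looks like for one 16x16x16 section; the World interface and the isOpaque()/addFace() methods are hypothetical stand-ins, not Minecraft's actual code:

```java
// Hidden-face culling sketch for one 16x16x16 chunk section: a face is only
// meshed if the neighbouring block in that direction is not opaque.
public class FaceCulling {
    interface World {
        boolean isOpaque(int x, int y, int z);      // true for a solid, fully opaque block
        void addFace(int x, int y, int z, int dir); // emit one quad into the chunk mesh
    }

    static final int[][] DIRECTIONS = {
        { 1, 0, 0}, {-1, 0, 0}, { 0, 1, 0}, { 0,-1, 0}, { 0, 0, 1}, { 0, 0,-1}
    };

    static int buildVisibleFaces(World world, int cx, int cy, int cz) {
        int visible = 0;
        for (int x = 0; x < 16; x++)
            for (int y = 0; y < 16; y++)
                for (int z = 0; z < 16; z++) {
                    int bx = cx * 16 + x, by = cy * 16 + y, bz = cz * 16 + z;
                    if (!world.isOpaque(bx, by, bz)) continue; // air/transparent: nothing to draw
                    for (int dir = 0; dir < 6; dir++) {
                        int[] d = DIRECTIONS[dir];
                        // Every neighbour lookup here is CPU work, which is part of
                        // why chunk meshing leans so heavily on the CPU.
                        if (!world.isOpaque(bx + d[0], by + d[1], bz + d[2])) {
                            world.addFace(bx, by, bz, dir);
                            visible++;
                        }
                    }
                }
        return visible;
    }
}
```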
Then add in all the block updates, entity movement, and so on which occur constantly. Bedrock actually only updates chunks within 4 chunks of the player (81 chunks) while Java updates all chunks within render distance (i.e. 441 chunks for a 10 chunk render distance*; entities are only active if the 5x5 chunk area around them is loaded, so they are processed within 289 chunks):
But Java actually has things going on in all of those chunks; Bedrock just SHOWS you the chunks - if you are more than 4 chunks away everything deactivates.
*In 1.3.1-1.6.4 the internal server used a fixed view distance of 10 but a chunk update radius of only 7, so only about half the loaded chunks (225 of 441) were being updated, which may explain why some people saw worse performance after 1.8(?) made the chunk update radius equal to the view (and, since 1.7.4, render) distance. Also, I've noticed that in newer versions the area of chunks the player generates is not a perfect square but often has random single chunks along the sides, suggesting that blocks updating next to unloaded chunks force them to load - the chunk update radius should be at least 1 less than the view distance to avoid this (similar to how the entity update radius is 2 less).
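All of these chunk counts come from the same formula - a square of side 2r+1 around the player - so they are easy to verify:

```java
// Chunk counts for a square radius r around the player: (2r + 1)^2.
// The radii are the ones quoted above (the entity radius of 8 is view distance 10 minus 2).
public class ChunkCounts {
    static int chunksInRadius(int r) {
        return (2 * r + 1) * (2 * r + 1);
    }
    public static void main(String[] args) {
        System.out.println("Java, 10 chunk render distance:  " + chunksInRadius(10)); // 441 (21x21)
        System.out.println("Entity processing, radius 8:     " + chunksInRadius(8));  // 289 (17x17)
        System.out.println("1.3.1-1.6.4 update radius of 7:  " + chunksInRadius(7));  // 225 of 441
        System.out.println("Bedrock update radius of 4:      " + chunksInRadius(4));  // 81 (9x9)
    }
}
```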
Also, it is a bit misleading to say that Java Minecraft does not have its own game engine - all games run on an engine of some sort, whether it is based on a customized version of a "standard" engine or a homemade one (as in the case of Java; Bedrock probably uses its own as well). However, Java does depend on a 3rd party library (LWJGL) to provide an interface to the GPU/OpenGL, and this can be where some of the issues lie; among other things, LWJGL 2 is ancient and has long been superseded by LWJGL 3, but unfortunately they are so different (not even backwards compatible) that a huge rewrite is needed in order to convert over instead of simply updating the library.
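To show just how different the two are, compare the bare minimum needed to open a window and get an OpenGL context in each (simplified sketches, not Minecraft's actual startup code):

```java
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;

public class Lwjgl2Window {
    public static void main(String[] args) throws Exception {
        // LWJGL 2: the library owns the one window through the static Display class.
        Display.setDisplayMode(new DisplayMode(854, 480));
        Display.setTitle("LWJGL 2");
        Display.create();
        while (!Display.isCloseRequested()) {
            // ... render ...
            Display.update(); // swaps buffers and pumps input
        }
        Display.destroy();
    }
}
```

```java
import static org.lwjgl.glfw.GLFW.*;
import org.lwjgl.opengl.GL;

public class Lwjgl3Window {
    public static void main(String[] args) {
        // LWJGL 3: windowing is delegated to GLFW and the window is an explicit handle,
        // so none of the old Display-based code carries over.
        glfwInit();
        long window = glfwCreateWindow(854, 480, "LWJGL 3", 0, 0);
        glfwMakeContextCurrent(window);
        GL.createCapabilities(); // replaces the implicit context setup of LWJGL 2
        while (!glfwWindowShouldClose(window)) {
            // ... render ...
            glfwSwapBuffers(window);
            glfwPollEvents();
        }
        glfwDestroyWindow(window);
        glfwTerminate();
    }
}
```

Nothing from the first sketch compiles against the second library, which is why "just update LWJGL" amounts to rewriting the whole windowing/input layer.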