[Tools] SLR - Offline JP to EN Translation for RPG Maker VX, VX Ace, MV, MZ, and Pictures

I've released SLR Translator v2.0.

Among a bunch of other changes, this introduces the SEP offline picture translation system.

It loads pictures into an SLR project as if they were text files, letting you translate the text however you want. On export it automatically masks the original text and draws the new text in its place, making absolutely sure the new text never covers a larger area than the original did.
It will also attempt to logically group the text so that font size stays consistent.
You can also override the automated system if you want to specify a different font, font size, or font color.
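To illustrate the fit-inside-the-original-box idea, here's a minimal sketch. It's my own simplification: it assumes a fixed average glyph width/height ratio instead of measuring real glyph metrics the way an actual renderer would.

```python
def fit_font_size(text: str, box_w: int, box_h: int,
                  char_aspect: float = 0.55, max_size: int = 72) -> int:
    """Largest font size (px) at which `text` fits inside the masked box.

    char_aspect is an assumed average glyph width/height ratio; a real
    renderer would measure actual glyph metrics instead.
    """
    for size in range(max_size, 0, -1):
        # Must fit vertically, and the estimated line width must fit horizontally.
        if size <= box_h and len(text) * size * char_aspect <= box_w:
            return size
    return 1  # fall back to the smallest size rather than fail
```

Longer replacement text naturally lands at a smaller size, which is why grouping related text (as SEP does) keeps font sizes consistent instead of letting each snippet shrink independently.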
During tests using Qwen2.5-VL-32B-Instruct-abliterated as the assistant LLM and translating with DeepSeek through DSLR, SEP consistently outperformed the free picture translation services of both Google and Yandex. (For horizontal Japanese.)

Current Limitations:
Only works on unencrypted PNG files.
Only works on horizontal text. (Slightly tilted text technically works, but the new text will be drawn horizontally and probably in a very small font size.)
 
As of v2.004 SEP now supports JPG and JPEG files, too. In case someone cares about that for some reason.
 
I've released v2.005.
New option to change how the word-wrap calculation treats text commands.
Revamped the drawing module of SEP.
It's now much faster. During tests it was exporting pictures in under 1 second.
It now supports angled/rotated text. (But still not vertical text.)
Blur can no longer affect new text, because all blur boxes are applied before any text is applied.
The blur masking can now be turned off completely in the options menu.
 
I've released 2.007.
I didn't make a specific post for v2.006, since nobody really cares about SEP besides me.
But between the two versions, error handling is now A LOT better for both SEP and DSLR.
DSLR will now also log all requests and responses to the console even if it did not detect a problem.
That means whenever you see a terrible translation, you can now check what the actual request was, in case it wasn't actually the LLM screwing up but DSLR making a bad request that needs to be prevented.
(A lot happens to a cell before any of it is actually sent to the LLM, to preserve scripts and stuff like that; it's very rare that an entire cell is fed in as-is.)
 
Just some fun numbers about DSLR.

Let's say we want to translate a game with 4 million characters of text that needs translation (Including duplicates, 16 SLR batches), which would be a small to medium sized game.
And we use deepseek-chat-v3-0324 with the second fastest provider on OpenRouter, Baseten (the technically fastest, SambaNova, is basically a scam), because we want the translation to finish as fast as possible. (There are significantly cheaper options.)
That would be running costs of $0.77/mt (per million tokens) input and $0.77/mt output, plus 5.5% tax.
A token is on average 4 characters, and we will not be using cache because it makes AI translations worse.

That means the absolute minimum you would spend on the translation is $1.65, assuming an optimistic 110% input / 100% output token ratio, that you never have to redo any part of it due to bugs, and that not a single AI response ever needs correction.

If we had used GPT-4.1 instead ($2/mt input, $8/mt output, plus 5.5% tax),
then (everything else being the same) the absolute minimum would be $10.76.
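As a quick sketch of that arithmetic, under the same assumptions (4 characters per token, 110%/100% input/output ratio, 5.5% tax; exact cents depend on where you round):

```python
def translation_cost(chars, in_price, out_price,
                     input_ratio=1.10, output_ratio=1.00,
                     chars_per_token=4, tax=0.055):
    """Minimum API cost in USD; prices are per million tokens."""
    tokens = chars / chars_per_token          # 4M chars -> 1M tokens
    cost = (tokens * input_ratio * in_price +
            tokens * output_ratio * out_price) / 1_000_000
    return cost * (1 + tax)

deepseek = translation_cost(4_000_000, 0.77, 0.77)  # ~ $1.7 with these assumptions
gpt41    = translation_cost(4_000_000, 2.00, 8.00)  # -> $10.76 after tax
```

The output price dominates for GPT-4.1, which is why a more pessimistic input ratio barely narrows the gap.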

Or, put differently, using GPT would be a 552% cost increase compared to using the second fastest DeepSeek provider.

If you feel like the translation is also 552% better, go for it. But please note that dazedanon's translations are not in any way representative of the normal quality of GPT-4.1; his stuff is not raw GPT output by any means. People don't give him enough credit.

Edit: I purposely picked an expensive DeepSeek provider to match/exceed the speed of GPT, but as a result it looks like I'm just trying to make GPT look bad by being so optimistic about the input ratio.
Yes, with that provider the gap would be quite a bit smaller if you go with a 200%/100% ratio instead, which might actually be more realistic. But if you don't go for the absolute fastest, you can get almost the same speed with GMICloud at $0.28/$0.88, and then GPT really looks like overpriced shit regardless of how pessimistic you are with the input ratio.
 
I've released v2.008 and v2.009.
Edit: v2.009 changes how the cache file names are generated, so they aren't stupidly long and using something like ":free" no longer makes the cache unique. If you used any version before v2.009 to make persistent cache files, you will have to rename them.
The translator cache is located at www>addons>SLRtrans.
Your project cache is at www>php>cache btw.
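The exact naming scheme isn't spelled out here, but conceptually it's something like the sketch below. The function name, the suffix handling, and the `.cache` extension are my illustration, not SLR's actual code:

```python
import re

def cache_filename(model_id: str) -> str:
    """Derive a short, stable cache file name from an OpenRouter model id.

    Variant suffixes like ':free' are stripped so both variants share one
    cache file (hypothetical sketch, not SLR's actual scheme).
    """
    base = model_id.split(":", 1)[0]               # drop ':free' etc.
    safe = re.sub(r"[^A-Za-z0-9._-]+", "_", base)  # make it filesystem-safe
    return f"{safe}.cache"
```

The point of stripping the suffix is that ":free" and the paid variant are the same model, so their translations can safely share a cache.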

Revamped the DSLR Cache System, it's now actually usable.

It will now create individual cache files based on LLM model. Meaning it can no longer put qwen translations into a deepseek cache, etc.
Added new option "Clean Persistent Cache".
It makes it so translations are cached to separate "Clean" files when no Context Prompt was given.
The Persistent Cache option overrides Clean Persistent Cache and still always caches everything.
Added new option Batch Cache.
Batch Cache only uses cached translations if every line in the current batch was cached in the session or in active cache files. "Clean" cache files will only be used if no Context Prompt was given.
Renamed the old Cache option to Line Cache. Line Cache overrides Batch Cache.
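My reading of how those options interact, as a sketch (the function and file names are hypothetical, not SLR's code):

```python
def pick_cache(model: str, has_context_prompt: bool,
               persistent: bool, clean_persistent: bool):
    """Which persistent cache file (if any) a translation is stored in.

    Sketch of the option descriptions above: Persistent Cache wins and
    caches everything; Clean Persistent Cache only writes to a separate
    'Clean' file when no Context Prompt was given.
    """
    if persistent:
        return f"{model}.cache"
    if clean_persistent and not has_context_prompt:
        return f"{model}.clean.cache"
    return None  # session-only, nothing persisted
```

Per-model file names are what prevent e.g. qwen translations from ending up in a deepseek cache.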

With the Batch Cache option it is now incredibly unlikely that the cache will make the translation worse, but you still do not have to pay double for long repeated sections (which are really common in VX and VX Ace games).
And there's also no real downside to using the persistent clean cache, because it would have given the same response anyway.

But if you are someone who actually changes the other prompts I recommend keeping all persistent cache off.
 
There's a bug in the current version that breaks \C[] \N[] \V[] commands at the end of cells sometimes.
Here's a quick fix.
Put DSLREngine.js into the www>addons>SLRtrans>lib>TranslationEngine folder and overwrite.

I have problems with my internet right now, I'll make a proper update once that is fixed again.
 

Attachments

  • DSLREngine.7z
    19.5 KB
Partially fixed my internet problems. At least enough to upload stuff.

I've released v2.010
Fixed DSLR breaking commands at the end of cells.
Changed some post processing for Skills, States and plugins.
The main change is that it will now always put spaces at the start of combat skill snippets, so they are actually readable even if the LLM screws up.
 
I've released v2.012.
Most notable change:
Added 4 more DSLR options.
Minimum Sent Cells, Maximum Sent Cells, Minimum Sent Lines, Maximum Sent Lines.
They allow you to limit or require a set amount in a batch regardless of the set character limit.

The idea is that if you know your LLM struggles to preserve all dividers past a certain number of lines, you can prevent DSLR from sending that many, even if the individual character count is super low.
Or that you can set a minimum amount of cells to be sent so that the LLM will always have enough context even if the individual cells have a lot of characters and you would run out.

It's still pretty wonky; the Line options basically don't do anything, since the limiting factor should be the batch size, not the request size. But it's certainly better than just having the character count setting.

Edit: I had a brainfart. While it does enforce a minimum number of cells sent to DSLR, they get filtered by the wrapper afterwards, which means a minimum of 40 cells can still result in a batch with only 23 lines. So it's kinda useless right now.

Edit2:
v2.013 is going to be a huge improvement. I just had one of those "I should have done that ages ago" moments when it comes to my batch translation system in general. Please wait a bit if you are planning a project using DSLR.
 
I've released v2.013, fixing the mess that was v2.012.
If you are using DSLR please update, it's now better in general.
Faster, less token usage, more accurate cache, same translation quality.

By default the SLR Batchtranslator now sends a batch with a maximum of 10000 characters to the DSLR engine.
The DSLR engine then divides that batch into requests based on the Request Character Limit (default 1200 characters), but always at least whatever is set as Minimum Sent Lines (default 40 lines) and never more than what is set as the Maximum Sent Lines (default 200 lines).
Should a batch cause the next batch to have fewer than the Minimum Sent Lines, it will merge with that batch, creating a slightly larger batch rather than allowing one without sufficient context. (They can still be split once if the LLM fails 3 attempts.)
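As a rough sketch of that splitting behaviour, using the defaults above (pure character counting on plain lines; the real engine also tracks cells and text commands):

```python
def split_batch(lines, char_limit=1200, min_lines=40, max_lines=200):
    """Split a batch of text lines into LLM requests.

    A request grows until it reaches char_limit, but always contains at
    least min_lines and never more than max_lines. An undersized final
    request is merged into the previous one rather than sent without
    sufficient context. Sketch of the described behaviour, not DSLR code.
    """
    requests, current, chars = [], [], 0
    for line in lines:
        current.append(line)
        chars += len(line)
        if len(current) >= max_lines or (chars >= char_limit
                                         and len(current) >= min_lines):
            requests.append(current)
            current, chars = [], 0
    if current:
        if requests and len(current) < min_lines:
            requests[-1].extend(current)  # merge undersized tail
        else:
            requests.append(current)
    return requests
```

For example, 100 lines of 50 characters each would come out as one request of 40 lines (the minimum forces it past the 1200-character limit) and one of 60 (the 20-line tail merged into the second request).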
Because the batches handled by the Batchtranslator are so large now and only exact matches use the cache, it's now almost impossible to have a cache hit unless it is exactly something you've translated before, ensuring it will not decrease translation quality.
It also means that the token summary of a batch is a lot more useful.
Losing a large batch because of a single error is frustrating, but for the sake of translation quality Line Cache shouldn't really be used anyway.

All of this stuff can be customized in the options menu.

If you are using an endpoint that allows concurrent requests, the UI might get pretty messy now. It "should" still work if you enable it in the options, but you might want to decrease the batch size to less than 10k.
 
