Access violation Exception on Visual Studio Debugger #1841
The test was made on the classic sample code 👍🏼 (with `#include <stdio.h>` added so it compiles cleanly):

```c
#include <stdio.h>
#include <unicorn/unicorn.h>

// code to be emulated
#define X86_CODE32 "\x41\x4a" // INC ecx; DEC edx

// memory address where emulation starts
#define ADDRESS 0x1000000

int main(int argc, char** argv, char** envp)
{
    uc_engine* uc;
    uc_err err;
    int r_ecx = 0x1234; // ECX register
    int r_edx = 0x7890; // EDX register

    printf("Emulate i386 code\n");

    // Initialize emulator in X86-32bit mode
    err = uc_open(UC_ARCH_X86, UC_MODE_32, &uc);
    if (err != UC_ERR_OK) {
        printf("Failed on uc_open() with error returned: %u\n", err);
        return -1;
    }

    // map 2MB memory for this emulation
    uc_mem_map(uc, ADDRESS, 2 * 1024 * 1024, UC_PROT_ALL);

    // write machine code to be emulated to memory
    if (uc_mem_write(uc, ADDRESS, X86_CODE32, sizeof(X86_CODE32) - 1)) {
        printf("Failed to write emulation code to memory, quit!\n");
        return -1;
    }

    // initialize machine registers
    uc_reg_write(uc, UC_X86_REG_ECX, &r_ecx);
    uc_reg_write(uc, UC_X86_REG_EDX, &r_edx);

    // emulate code in infinite time & unlimited instructions
    err = uc_emu_start(uc, ADDRESS, ADDRESS + sizeof(X86_CODE32) - 1, 0, 0);
    if (err) {
        printf("Failed on uc_emu_start() with error returned %u: %s\n",
               err, uc_strerror(err));
    }

    // now print out some registers
    printf("Emulation done. Below is the CPU context\n");
    uc_reg_read(uc, UC_X86_REG_ECX, &r_ecx);
    uc_reg_read(uc, UC_X86_REG_EDX, &r_edx);
    printf(">>> ECX = 0x%x\n", r_ecx);
    printf(">>> EDX = 0x%x\n", r_edx);

    uc_close(uc);
    return 0;
}
```
Are you using the code from the dev branch?
Yes.
Just tested; it doesn't occur on the HEAD of the master branch.
It was probably introduced by the recent Windows demand-paging feature, but I can't reproduce it on my side. Do you have any special setup?
I don't have any special build. I'm on Windows 10.0.19045.2965 using:
I believe this happens because a debugger is attached; in that case the debugger is the first to see the exception, before it is passed to the application. This is probably due to the hacky code introduced recently for the TCG buffer. You should be able to pass the exception on to the application, and it should continue running.
Oh wait, ZehMatt is correct: we now only commit pages when they are accessed for the first time, which raises exceptions. However, the exceptions are handled safely, and you can configure your debugger to ignore this exception type and safely continue execution.
Is it still necessary now, given that the user can specify the buffer size? I also believe that expanding the buffer on demand the way it currently works is not the most elegant solution; it would be better to track how much is used and expand when it runs out of room. Another thing to consider is simply reducing the default buffer size from 1 GiB to something more sensible; we are not running a fully fledged OS with unicorn.
I think yes, because it's the most flexible way.
I don't get your point, because currently we do expand when it runs out of room, no? Note that the current behavior also matches *nix systems.
That doesn't make much sense, because we only allocate virtual address space. The buffer size limit only affects how many unicorn instances can be created at the same time.
There is quite a big difference in how this is done. Think of it like a vector that has a capacity and a size: right now this relies on the OS to tell us when we are accessing pages that aren't committed yet, whereas it could be done without any exceptions or OS support. I don't understand how you can think this is a good solution. Your solution also has no guarantee that we actually get the page; when memory is close to running out, the page may be given to something else in the process's address space.
I get your idea, but the root cause is that all the qemu code assumes the TCG buffer is fixed and contiguous and will never be resized. That is to say, almost none of the code takes running out of buffer into account; it just aborts in that case. In fact, my previous solution, which used the TCG multiple-regions mechanism originally designed for MTTCG, achieved what you describe, but I often got weird crashes and it was really hard to maintain. Lastly, I don't claim it's a good solution, but the key point is that it achieves the same behavior across all platforms, i.e. it is basically mmap, no? Or do you also think mmap is bad? I would note that some implementations of std::vector are also backed by mmap when they need a large region of memory. I understand you might argue we could get better performance, but so far we commit 4MB each time, which shouldn't have a big performance impact, though I haven't benchmarked it strictly.
Well, my primary concern is mostly the way it's handled, which is now done with exceptions. How does qemu cope with the fact that there is only 1 GiB available? Will it just overwrite the oldest block like an LRU, or how is it different from unicorn in this case?
No. If the allocation fails, it just aborts anyway. But if I remember correctly, qemu has a mechanism to decide the memory available for the TCG buffer, which should be max(256MB, min((max physical memory)/4, 1GB)). Unicorn at this moment will probably abort if it runs out of memory, because only a few cases are handled properly.
I see. I still think it's better not to rely on exceptions for this. The mechanism can stay nearly the same; we would just need to track how much is currently in use and commit a new page when it runs out of capacity, very much like it is handled now, minus the exceptions. Otherwise this seems fine to me.
Unfortunately, it's hard to achieve this without a thorough refactoring. For example, the tcg_out_opc in this issue writes through a raw pointer into the buffer, so we have no chance to intercept it and grow the buffer without exceptions.
Closed as not a bug (and documented).
The call to uc_mem_map triggers an exception in a debugging session:
The library was compiled in Release with debug info on Visual Studio 2022, SDK 10.0.22621.0.
It doesn't crash when run standalone.