What is a safe point in cheats
Safe Point is a feature that makes your aimbot go for points on the player model that intersect both the desync and the real hitboxes.
For example, if someone is freestanding, their desync will be extremely close to their real position (the nature of desync), with their heads
basically behind or in front of each other.
Safe point recognizes this and decides it can shoot there, since the bullet will still hit the real player.
With safe point you really shouldn't be missing shots due to bad animations, but I've seen some hacks still fail to hit.
Just keep 3 matrices: 2 for the possible desync resolve directions, and a 3rd for the original.
Scan for a hitbox to shoot, then re-trace and see if the point intersects all 3 hitboxes.
(You can also use this tracing method for triggerbot backtrack and missed-shot calculations)
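The three-matrix check above can be sketched roughly like this. This is an illustrative simplification, not engine code: each matrix's head hitbox is reduced to a sphere (center plus a shared radius), and a candidate aim point counts as safe only if it lands inside the hitbox under all three matrices. All names are made up for the example.

```java
// Hypothetical sketch of the safe-point test: a point is only "safe"
// if it lies inside the (sphere-approximated) hitbox under the original
// bone matrix AND both possible desync resolves.
public final class SafePointCheck {
    // Squared distance between two 3D points.
    static double distSq(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        return dx * dx + dy * dy + dz * dz;
    }

    // centers[0] = hitbox center from the original matrix,
    // centers[1], centers[2] = centers from the two resolve directions.
    public static boolean isSafe(double[] point, double[][] centers, double radius) {
        for (double[] c : centers) {
            if (distSq(point, c) > radius * radius) {
                return false; // misses the hitbox under at least one matrix
            }
        }
        return true; // intersects all three, so safe to shoot
    }
}
```

In a real cheat you would of course test capsule/box hitboxes from the engine's bone matrices rather than a single sphere, but the "must hit under every matrix" logic is the same.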
Another, lazier and LESS reliable method is using angToLocal:
You can check whether the person is leaning to the left or right relative to you.
If so, it usually means that the desync head and real head are extremely close (freestanding).
If they aren't, or the lean doesn't look good, just go for the pelvis, which in my experience has the smallest fake offset.
But I would recommend ray-tracing the hitboxes (there are many SDKs with it)
— Usually stored in the cheat's math files (math.cpp)
— Usually labelled something like TraceHitbox, and it requires a few other math helpers, like IntersectInfinityRayWithSphere or something similar
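For reference, a minimal version of such a ray/sphere helper might look like the sketch below. The name and signature are illustrative guesses, not taken from any particular SDK, and the ray direction is assumed to be normalized.

```java
// Hypothetical IntersectInfinityRayWithSphere-style helper.
public final class RaySphere {
    // Returns true if the infinite ray origin + t*dir (t >= 0) passes
    // through a sphere at 'center' with the given radius.
    // 'dir' is assumed to be a unit vector.
    public static boolean intersectRaySphere(double[] origin, double[] dir,
                                             double[] center, double radius) {
        // Vector from ray origin to sphere center.
        double ox = center[0] - origin[0];
        double oy = center[1] - origin[1];
        double oz = center[2] - origin[2];
        // Project it onto the ray direction to find the closest approach.
        double t = ox * dir[0] + oy * dir[1] + oz * dir[2];
        if (t < 0) return false; // sphere is behind the ray origin
        // Squared distance from sphere center to the closest point on the ray.
        double cx = origin[0] + t * dir[0] - center[0];
        double cy = origin[1] + t * dir[1] - center[1];
        double cz = origin[2] + t * dir[2] - center[2];
        double distSq = cx * cx + cy * cy + cz * cz;
        return distSq <= radius * radius;
    }
}
```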
That’s my take on it at least, and if anyone would like to correct me, please do :3
To save yourself some FPS, it's enough to grab the points from matrix A and then trace them in the opposite direction; if they hit (you could add a check that it's the same hitbox if needed), you know that they overlap.
This assumes your points are inside the hitboxes and not outside; if your points are literally on the edge or further out, you might want to trace both matrices.

e.g. take points with the blue matrix, set the bone matrix to the green one, and trace the points you got from the blue one
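A sketch of that shortcut, under the same sphere-hitbox simplification as before (all names hypothetical): sample points are taken from inside the hitbox under matrix A, then simply re-tested against the same hitbox under matrix B; any hit means the two overlap.

```java
// Hypothetical one-directional overlap test: points sampled from the
// hitbox under matrix A are re-tested against the hitbox under matrix B
// only. Any point landing inside B means the two hitboxes overlap.
public final class OverlapCheck {
    static double distSq(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        return dx * dx + dy * dy + dz * dz;
    }

    // pointsFromA: sample points taken from inside matrix A's hitbox.
    public static boolean overlaps(double[][] pointsFromA,
                                   double[] centerB, double radiusB) {
        for (double[] p : pointsFromA) {
            if (distSq(p, centerB) <= radiusB * radiusB) {
                return true; // a point from A is inside B: overlap found
            }
        }
        return false; // no sampled point reached B (trace both ways if unsure)
    }
}
```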
Alexey Ragozin
Nice article Alexey. Do you know what part of the JDK code really does this — "JVM unmaps page with that address provoking page fault on application thread"?
I think the Azul JVM also used to do this to quickly trap moved/GC'ed addresses.

Using page faults for a read barrier ("quickly trap moved/GC'ed addresses") would be prohibitively expensive. The Azul JVM does not use page faults for its read barrier, though it does use this technique for defragmenting the physical memory associated with large objects.
Azul uses custom page mapping to facilitate its software read barrier, but this technique does not rely on page faults.
Or at least that was the case the last time I worked with Azul.
Thank you for an eye-opening article on safepoints. Do you know if there is any way to identify the reason for a huge pause of hundreds of seconds that does not appear to be related to GC activity?
Total time for which application threads were stopped: 0.0020916 seconds
Total time for which application threads were stopped: 0.0677614 seconds
Total time for which application threads were stopped: 0.0016208 seconds
Total time for which application threads were stopped: 195.2580105 seconds
Total time for which application threads were stopped: 0.0313111 seconds
Total time for which application threads were stopped: 0.0005465 seconds
Total time for which application threads were stopped: 0.0006269 seconds

First enable safepoint logging with -XX:+PrintSafepointStatistics
-XX:PrintSafepointStatisticsCount=1
This will let you see whether a safepoint is the culprit.
The last problem I saw with slow safepoints was a bug in the JIT combined with unusual application code.
Trying the latest JVM is another step.
We switched to 1.6.0_43; at the time that happened we had 1.6.0_31. One of the reasons was bug 2221291. Can you tell me the bug ID for the JIT-related problem?

No, I didn't track down the exact bug. A slight change of code solved the issue in my case.
Yep, 2221291 is a nasty one.
Very informative article, thank you. We have seen stalls due to IO overload inside Linux. When this happens, GC log entries show a time of
at least 1 second. We are able to recreate this type of stall in the lab too. It turns out that deferred writes appending to a file can be blocked for a long time when the write is blocked by a journal commit, or when dirty_ratio is exceeded. We straced the Java process and could correlate some, but not all, of the stalls to GC threads writing to the gc.log file. If GC threads do not have to park the Java threads running in kernel mode, we are stumped about what else could have caused the stall (where user_time is
0). Any other data/traces you would recommend to help us understand the issue better? Many thanks.

Have you enabled -XX:+PrintSafepointStatistics?
Sometimes I've seen the JVM spend too much time trying to enter a safepoint. Safepoint initiation time is accounted as GC pause time.
Other suspects are native threads taking the GC lock via JNI (-XX:+PrintJNIGCStalls may help identify whether this is the case).
Hi Alexey, thanks for the feedback. We did not always turn on -XX:+PrintSafepointStatistics because the output is so obscure. Our test program that recreates the stall just uses log4j and makes no JNI calls, but it's great to know about the PrintJNIGCStalls option.
Hi Alexey, your articles are very informative. Recently I faced a situation where GC is taking a mammoth amount of time and I am not sure what the reason could be. Here is the output of the jstat -gc command:
S0C S1C S0U S1U EC EU OC OU PC PU YGC YGCT FGC FGCT GCT
77440.0 73088.0 22896.4 0.0 1946624.0 222690.4 4194304.0 3638965.1 262144.0 216641.1 1093 11258.452 3 10031.493 21289.944

To be able to give you reasonable advice, I need:
— your JVM start parameters
— an excerpt from your GC logs with at least -XX:+PrintGCDetails enabled
I would also suggest you post the question on stackoverflow.com (and post the link here), as it is a better platform for this kind of question.
Hi Alexey, as suggested I have posted the question on stackoverflow.com and here is the link.
Also, here are the startup parameters; PrintGCDetails is not enabled and enabling it will take time, as this is a production server.
-Xms6144m -Xmx6144m -XX:MaxPermSize=256m -Djava.security.policy=/bea/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.ProductionModeEnabled=true -da -Dplatform.home=/bea/wlserver_10.3 -Dwls.home=/bea/wlserver_10.3/server -Dweblogic.home=/bea/wlserver_10.3/server -Dweblogic.management.discover=true -Dwlw.iterativeDev=false -Dwlw.testConsole=false -Dwlw.logErrorsToConsole=false -Dweblogic.ext.dirs=/bea/patch_wls1036/profiles/default/sysext_manifest_classpath -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Duser.timezone=GMT -Dfile.encoding=UTF-8 -Duser.language=en -Duser.country=US -Dweblogic.wsee.wstx.wsat.deployed=false -XX:+DisableExplicitGC
Running vmstat for 5 hours has given the following result; I am providing part of the output:
swap free re mf pi po
40336468 4025208 383 5473 465 59
40336132 4025732 383 5477 465 59
40336020 4025732 383 5478 465 59
40335940 4025752 383 5479 465 59
40335860 4025776 383 5479 465 59
40335776 4025796 383 5480 465 59
40335696 4025816 383 5481 465 59
40335584 4025816 383 5482 464 59
40335504 4025836 383 5483 464 59
40335420 4025856 383 5484 464 59
Can we infer something from this output?
I am getting millions of the following messages:
vmop [threads: total initially_running wait_to_block] [time: spin block sync cleanup vmop] page_trap_count
54.104: ThreadDump [ 153 2 3 ] [ 0 0 0 0 0 ] 0
54.104: ThreadDump [ 153 3 4 ] [ 0 0 0 0 0 ] 0
54.104: ThreadDump [ 153 1 6 ] [ 0 0 0 0 0 ] 0
54.104: ThreadDump [ 153 2 2 ] [ 0 0 0 0 0 ] 0
54.105: ThreadDump [ 153 0 2 ] [ 0 0 0 0 0 ] 0
54.105: ThreadDump [ 153 1 6 ] [ 0 0 0 0 0 ] 0
What could be the reason for this ThreadDump activity at a safepoint?

I would guess it is the result of a profiler. Thread dumps are widely used by profilers and sometimes by monitoring tools. Java code can also trigger a thread dump of its own process.
Thanks for your quick replies.
There is no profiler attached to the Java process.
I don't see any log of regular thread dumps, which means they are internal thread dumps as you mentioned.
Bypassing Frost — the Frost protection system for Point Blank and other games

Many of you know that 4game works closely with the Frost anti-cheat system (and they are not the only ones). Of course, Frost is full of holes, but it still occasionally manages to hand out an account ban. Here we will learn how to bypass it and run cheats without a ban in any game that uses this system (including Point Blank).
The instructions are very simple — we will need a special program called PCHunter. Download it:
Password: aimcop
1. First, launch the game you need. In our example it is PB;
2. Run the bypass program;
3. Go to the "Ring0 Hooks" tab, then to "Object Type";
4. There we need the entry highlighted in red: \frost\Frost.sys ;
5. Right-click on it and select UnHook All;
6. That's it — you can now use cheats in PB.
This bypass not only protects you from automatic bans but also gives you open access to the game process (which you need if you cheat via Cheat Engine)
Psychosomatic, Lobotomy, Saw
I’ve been giving a couple of talks in the last year about profiling and about the JVM runtime/execution and in both I found myself coming up against the topic of Safepoints. Most people are blissfully ignorant of the existence of safepoints and of a room full of people I would typically find 1 or 2 developers who would have any familiarity with the term. This is not surprising, since safepoints are not a part of the Java language spec. But as safepoints are a part of every JVM implementation (that I know of) and play an important role, here’s my feeble attempt at an overview.
What’s a Safepoint?
A safepoint is a point in execution where a thread’s state is well described, so the JVM can safely inspect or manipulate it. Many JVM operations require bringing one or all threads to a safepoint, for example:
- Some GC phases (the Stop The World kind)
- JVMTI stack sampling methods (not always a global safepoint operation for Zing)
- Class redefinition
- Heap dumping
- Monitor deflation (not a global safepoint operation for Zing)
- Lock unbiasing
- Method deoptimization (not always)
- And many more!
- Safepoints are a common JVM implementation detail
- They are used to put mutator threads on hold while the JVM ‘fixes stuff up’
- On OpenJDK/Oracle every safepoint operation requires a global safepoint
- All current JVMs have some requirement for global safepoints
When is my thread at a safepoint?

- A Java thread is at a safepoint if it is blocked on a lock or synchronized block, waiting on a monitor, parked, or blocked on blocking IO. Essentially these all qualify as orderly de-scheduling events for the Java thread, and as part of tidying up before being put on hold the thread is brought to a safepoint.
- A Java thread is at a safepoint while executing JNI code. Before crossing the native call boundary the stack is left in a consistent state before handing off to the native code. This means that the thread can still run while at a safepoint.
- A Java thread which is executing bytecode is NOT at a safepoint (or at least the JVM cannot assume that it is at a safepoint).
- A Java thread which is interrupted (by the OS) while not at a safepoint is not brought to a safepoint before being de-scheduled.
- The JVM cannot force any thread into a safepoint state.
- The JVM can stop threads from leaving a safepoint state.
Bringing a Java Thread to a Safepoint
- Between any 2 bytecodes while running in the interpreter (effectively)
- On ‘non-counted’ loop back edge in C1/C2 compiled code
- Method entry/exit (entry for Zing, exit for OpenJDK) in C1/C2 compiled code. Note that the compiler will remove these safepoint polls when methods are inlined.
- ‘{poll}‘ or ‘{poll return}‘ on OpenJDK, this will be in the instructions comments
- ‘tls.pls_self_suspend‘ on Zing, this will be the flag examined at the poll operation
- On Oracle/OpenJDK a blind TEST of an address on a special memory page is issued. Blind because it has no branch following it so it acts as a very unobtrusive instruction (usually a TEST is immediately followed by a branch instruction). When the JVM wishes to bring threads to a safepoint it protects the page causing a SEGV which is caught and handled appropriately by the JVM. There is one such special page per JVM and therefore to bring any thread to a safepoint we must bring all threads to a safepoint.
- On Zing the safepoint flag is thread local (hence the tls prefix). Threads can be brought to a safepoint independently.
- Threads can lock themselves out (e.g. by calling into JNI, or blocking at a safepoint).
- Threads can try and re-enter (when returning from JNI), but if the lock is held by the JVM they will block.
- The safepoint operation requests the lock and blocks until it owns it (when all the mutators have locked themselves out)
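As a loose analogy (my own sketch, not JVM code), the check-in/check-out protocol above behaves like a read-write lock: each mutator holds a read lock while running Java code and releases it at a poll or when entering JNI, and the safepoint operation takes the write lock, which it only acquires once every mutator has checked out.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Toy model of the safepoint check-in protocol using a read-write lock.
// Mutators "check in" to run Java code and "check out" at a poll; the VM
// thread's safepoint operation blocks until all mutators have checked out.
public final class SafepointModel {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // A mutator runs Java code while holding the read lock.
    public void mutatorCheckIn()  { lock.readLock().lock(); }

    // At a poll (or when entering JNI) the mutator checks out.
    public void mutatorCheckOut() { lock.readLock().unlock(); }

    // The safepoint operation owns the "safepoint" exclusively: it cannot
    // start until every read lock is released, and mutators trying to
    // check back in will block until it finishes.
    public void runSafepointOperation(Runnable op) {
        lock.writeLock().lock();
        try {
            op.run();
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

The real mechanism is of course poll-based rather than lock-based, but the invariant is the same: the operation runs only while no mutator is "checked in".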
All Together Now
- Safepoint triggers/operations: reasons to request, and stuff to do which requires one or all threads to be at a safepoint
- Safepoint state/lock: into which Java threads can voluntarily check in, but can only leave when the JVM is done executing the safepoint operation
- Safepoint poll: the voluntary act by a Java thread to go into a safepoint if one is needed

- Time To Safepoint (TTSP): Each thread enters a safepoint when it hits a safepoint poll, but arriving at a safepoint poll requires executing an unknown number of instructions. We can see J1 hits a safepoint poll straight away and is suspended. J2 and J3 are contending on the availability of CPU time. J3 grabs some CPU time, pushing J2 into the run queue, but J2 is not at a safepoint. J3 arrives at a safepoint and suspends, freeing up the core for J2 to make enough progress to get to a safepoint poll. J4 and J5 are already at a safepoint while executing JNI code, so they are not affected. Note that J5 tries to leave JNI halfway through the safepoint and is suspended before resuming Java code. Importantly, we observe that the time to safepoint varies from thread to thread and some threads are paused for longer than others; Java threads which take a long time to get to a safepoint can delay many other threads.
- Cost of the safepoint operation: This will depend on the operation. For GetStackTrace the cost of the operation itself will depend on the depth of the stack for instance. If your profiler is sampling many threads (or all the threads) in one go (e.g. via JVMTI::GetAllStackTraces) the duration will grow as more threads are involved. A secondary effect may be that the JVM will take the opportunity to execute some other safepoint operations while the going is good.
- Cost of resuming the threads.
- Long TTSP can dominate pause times: some causes for long TTSP include page faults, CPU over-subscription, long running counted loops.
- The cost of stopping all threads and starting all threads scales with the number of threads. The more threads the higher the cost. Assuming some non-zero cost to suspend/resume and TTSP, multiply by number of threads.
- Add -XX:+PrintGCApplicationStoppedTime to your list of favourite flags! It prints the pause time and the TTSP.
Example 0: Long TTSP Hangs Application
- Prevent C1/C2 compilation: Use -Xint or a compile command to prevent the method compilation. This will force safepoint polls everywhere and the thread will stop hurting the JVM.
- Replace the index type with long: this will make the JVM consider this loop as ‘uncounted’ and include a safepoint poll in each iteration.
- On Zing you can apply the -XX:+KeepSafepointsInCountedLoops option on either the JVM or method level (via a -XX:CompileCommand=option directive, or via a compiler oracle file). This option is coming shortly to an OpenJDK near you. See also the related bug where breaking the loop into an inner/outer loop with safepoints in the outer loop is discussed.
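A minimal sketch of the "long index" trick from the list above (class and method names are mine): both loops compute the same sum, but the int-indexed version is a counted loop whose back edge C2 may leave without a safepoint poll, while the long-indexed version is treated as uncounted and keeps a poll on every iteration.

```java
// Counted vs uncounted loop shapes. Functionally identical; they differ
// only in how the JIT treats their back edges with respect to safepoint
// polls (an implementation detail, not guaranteed by any spec).
public final class LoopShapes {
    // int index: a counted loop; C2 may elide the safepoint poll on the
    // back edge, so a long-running instance can stall time-to-safepoint.
    static long sumCounted(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    // long index: treated as an uncounted loop, so the safepoint poll on
    // the back edge is kept, at some cost to peak loop performance.
    static long sumUncounted(int n) {
        long s = 0;
        for (long i = 0; i < n; i++) s += i;
        return s;
    }
}
```

The behavioral difference is invisible at the Java level; it only shows up in the generated assembly and in TTSP under load.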
Example 1: More Running Threads -> Longer TTSP, Higher Pause Times
Example 2: Long TTSP has Unfair Impact
Solution 1: I use Zing -> use -XX:+KeepSafepointsInCountedLoops
SHAZZZING. This is pretty fucking awesome. The impact on contains1toK() is visible, but I didn’t have to change the code and the TTSP issue is gone man, solid GONE. A look at the assembly verifies that the inner loop is still unrolled within the outer loop, with the addition of the requested safepoint polls. There are more safepoint polls than I’d like here, since the option is applied to both the outer method and the inlined method (to be improved upon sometime down the road).
Solution 2: I use OpenJDK/Oracle, disable inner loop inlining
Solution 3: Last resort. stop compiling the offending method
Example 3: Safepoint Operation Cost Scale
We get long stops and short TTSPs! hoorah! You can play with the number of threads and the depth of the stack to get the effect you want. It is interesting that most profilers using JVMTI::Get*StackTrace opt for sampling all threads. This is risky to my mind as we can see above the profiling overhead is open ended. It would seem reasonable to sample 1 to X threads (randomly picking the threads we sample) to keep the overhead in check on the one hand and pick a reasonable balance of TTSP vs operation cost on the other.
As mentioned previously, JVMTI::GetStackTrace does not cause a global safepoint on Zing. Sadly, AFAIK no commercial profilers use this method, but if they did it would help. The JMX thread bean does expose a single-thread profiling method, but this only delegates to the multi-thread stack trace method underneath, which does cause a global safepoint.
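For concreteness, single-thread sampling via the JMX thread bean looks roughly like this (the helper name is mine). Note the caveat above: on OpenJDK this still implies a global safepoint, because getThreadInfo delegates to the multi-thread stack trace machinery underneath.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Sampling one thread's stack via the platform ThreadMXBean.
public final class SingleThreadSample {
    // Returns the stack of the given thread, up to maxDepth frames;
    // empty if the thread id does not correspond to a live thread.
    public static StackTraceElement[] sample(long threadId, int maxDepth) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        ThreadInfo info = bean.getThreadInfo(threadId, maxDepth);
        return info == null ? new StackTraceElement[0] : info.getStackTrace();
    }
}
```

A profiler built on this could pick a random subset of thread ids per sampling tick, trading per-sample coverage for a bounded safepoint operation cost, as suggested above.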
- Safepoint poll interval -> Will drive time to safepoint. CPU scarcity, swapping and page faults are other common reasons for large TTSP.
- Safepoint operation interval -> Sometimes in your control (be careful how you profile/monitor your application), sometimes not.
- Safepoint operation cost -> Will drive time in safepoint. Will depend on the operation and your application specifics. Study your GC logs.
Example 4: Adding Safepoint Polls Stops Optimization
- Loop unrolling : This is not really a safepoint issue, can be fixed as per the KeepSafepointsInCountedLoops option
- Fill pattern replacement (-XX:+OptimizeFill): the current fill implementation has no safepoints and the loop pattern it recognizes does not allow it.
- Superword optimizations (-XX:+UseSuperword): current Superword implementation doesn’t allow for long loops. This will probably require the compiler to construct an outer loop with an inner superword loop
Final Summary And Testament
If you’ve read this far, you deserve a summary, or a beer, or something. WELL DONE YOU. Here’s what I consider the main takeaways from this discussion/topic: