Synchronized blocks in Java

Java Synchronized Block for .class

The snippet synchronized(X.class) uses the class's Class object as a monitor. Since there is only one such object per class (the object representing the class metadata at runtime), at most one thread at a time can be in this block.

With synchronized(this) the block is guarded by the instance. For each instance, only one thread at a time may enter the block.

synchronized(X.class) is used to make sure that there is exactly one thread in the block. synchronized(this) ensures that there is at most one thread per instance. Whether this makes the actual code in the block thread-safe depends on the implementation: if you mutate only state of the instance, synchronized(this) is enough.

«as many threads may enter the block as there are instances» implies that the second form acts as a semaphore, which is not true. You should say something like: «synchronized(this) ensures that only one thread can enter the block for a given instance of the class».

So, if you have a static method and you don’t want to synchronize all of its body, then synchronized(this) is not an option (there is no this in a static context); synchronized(Foo.class) is appropriate instead. Is that right?

«synchronized(X.class) is used to make sure that there is exactly one thread in the block» is false: it depends on how many classloaders you have, since each classloader produces its own Class object and therefore its own lock.
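To make the distinction in this thread concrete, here is a minimal sketch (the Counter class and all of its names are invented for illustration): locking on the Class object serializes all callers JVM-wide (per classloader), while locking on this serializes callers per instance.

```java
// Invented example: two lock granularities for the same class.
class Counter {
    private static int classWideCount = 0; // shared by all instances
    private int instanceCount = 0;         // per-instance state

    // At most one thread in here across the whole JVM (per classloader),
    // because every caller contends for the single Counter.class monitor.
    static void incrementShared() {
        synchronized (Counter.class) {
            classWideCount++;
        }
    }

    // At most one thread in here per Counter instance; two threads working
    // on two different instances do not block each other.
    void incrementLocal() {
        synchronized (this) {
            instanceCount++;
        }
    }

    static int getShared() {
        synchronized (Counter.class) {
            return classWideCount;
        }
    }

    int getLocal() {
        synchronized (this) {
            return instanceCount;
        }
    }
}
```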

To add to the other answers:

static synchronized void myMethod() { /* code */ }
// is equivalent to:
static void myMethod() { synchronized (MyClass.class) { /* code */ } }

synchronized void myMethod() { /* code */ }
// is equivalent to:
void myMethod() { synchronized (this) { /* code */ } }

It took me a second reading to catch that the first two examples use the keyword «static». Just pointing that out for others who may have missed it. Without the static keyword, the first two examples would not be the same.

Those examples are NOT equivalent! The synchronized methods are «synchronized» as a whole when a thread tries to call them. The blocks, on the other hand, can have code above and below them that can be executed by multiple threads. They only synchronize within the block! That is not the same!

The whole point is that there is no code outside the synchronized blocks. That makes them equivalent. If you change one example, they are indeed no longer the same.

Just adding that synchronized(MyClass.class) can also be used from other classes, not just from within MyClass. I have legacy code where several classes use a method from one class (let’s say Foo.saveStuff). I need to make sure that only one thread uses saveStuff at a time. Due to bad DB transaction design I can’t just make saveStuff synchronized, so I have to use synchronized(Foo.class) in all the other methods.
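The workaround described above might be sketched roughly like this. Foo.saveStuff comes from the comment; OtherService and the counter are invented stand-ins for the real DB work:

```java
// Sketch of external locking on another class's Class object.
class Foo {
    static int saveCount = 0; // stands in for the real DB side effects

    // Cannot be declared synchronized here (per the comment above, due to
    // the transaction design), so callers must lock externally.
    static void saveStuff() {
        saveCount++;
    }
}

class OtherService {
    void doWork() {
        // Lock on Foo's Class object so that all callers, in any class,
        // serialize their calls to saveStuff.
        synchronized (Foo.class) {
            Foo.saveStuff();
        }
    }
}
```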

No, the first will get a lock on the Class object of MyClass, not on all instances of it. However, if used in instance methods, it will effectively block all instances, since they share a single Class object.


The second will get a lock on the current instance only.

As to whether this makes your objects thread safe, that is a far more complex question — we’d need to see your code!

Yes, it will (on any synchronized block/method).

I was wondering about this question for a couple of days myself (actually in Kotlin). I finally found a good explanation and want to share it:

A class-level lock prevents multiple threads from entering the synchronized block in any instance of the class at runtime. This means that if there are 100 instances of DemoClass at runtime, only one thread will be able to execute demoMethod() in any one instance at a time; all other instances will be locked for other threads.

Class-level locking should always be used to make static data thread-safe. Since the static keyword associates fields and methods with the class rather than with any particular instance, locking at the class level is what protects static fields and methods.

Also, note why .class is used: the Class object is effectively equivalent to a static lock variable of the class, similar to:

private final static Object lock = new Object(); 

where the role of the lock variable is played by the class itself and its type is Class.
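Putting the two points together, here is a hypothetical sketch of protecting static data with a class-level lock (IdGenerator is an invented name; a static synchronized method locks on IdGenerator.class):

```java
// Invented example: static state guarded by a class-level lock.
class IdGenerator {
    private static long nextId = 0; // static data: needs class-level locking

    // "static synchronized" locks on IdGenerator.class, so all callers
    // serialize here no matter which (if any) instance they hold.
    static synchronized long next() {
        return ++nextId;
    }
}
```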


Intrinsic Locks and Synchronization

Synchronization is built around an internal entity known as the intrinsic lock or monitor lock. (The API specification often refers to this entity simply as a «monitor.») Intrinsic locks play a role in both aspects of synchronization: enforcing exclusive access to an object’s state and establishing happens-before relationships that are essential to visibility.

Every object has an intrinsic lock associated with it. By convention, a thread that needs exclusive and consistent access to an object’s fields has to acquire the object’s intrinsic lock before accessing them, and then release the intrinsic lock when it’s done with them. A thread is said to own the intrinsic lock between the time it has acquired the lock and released the lock. As long as a thread owns an intrinsic lock, no other thread can acquire the same lock. The other thread will block when it attempts to acquire the lock.

When a thread releases an intrinsic lock, a happens-before relationship is established between that action and any subsequent acquisition of the same lock.
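A small sketch of that guarantee (the Handoff class and its fields are invented): because both methods synchronize on the same lock, a thread that observes ready == true is also guaranteed to see the payload written before the lock was released.

```java
// Invented example: the happens-before edge from unlock to lock.
class Handoff {
    private final Object lock = new Object();
    private int payload;           // plain field; safe only because both
    private boolean ready = false; // methods synchronize on the same lock

    void produce(int value) {
        synchronized (lock) {
            payload = value;
            ready = true;
        } // releasing `lock` happens-before the next acquisition of it
    }

    Integer tryConsume() {
        synchronized (lock) {
            // A thread that observes ready == true is guaranteed to see
            // the payload written before the lock was released.
            return ready ? payload : null;
        }
    }
}
```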

Locks In Synchronized Methods

When a thread invokes a synchronized method, it automatically acquires the intrinsic lock for that method’s object and releases it when the method returns. The lock release occurs even if the return was caused by an uncaught exception.


You might wonder what happens when a static synchronized method is invoked, since a static method is associated with a class, not an object. In this case, the thread acquires the intrinsic lock for the Class object associated with the class. Thus access to class’s static fields is controlled by a lock that’s distinct from the lock for any instance of the class.
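A sketch of that distinction (the Registry class and its names are invented): the static method and the instance method below use different locks, so they do not exclude each other.

```java
// Invented example: the class lock and an instance lock are distinct.
class Registry {
    private static int staticCount = 0;
    private int instanceCount = 0;

    // Acquires the Registry.class lock: guards static state.
    static synchronized void bumpStatic() {
        staticCount++;
    }

    // Acquires this instance's lock, a *different* lock, so it does not
    // exclude bumpStatic() running concurrently on another thread.
    synchronized void bumpInstance() {
        instanceCount++;
    }

    static synchronized int getStaticCount() {
        return staticCount;
    }

    synchronized int getInstanceCount() {
        return instanceCount;
    }
}
```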

Synchronized Statements

Another way to create synchronized code is with synchronized statements. Unlike synchronized methods, synchronized statements must specify the object that provides the intrinsic lock:

public void addName(String name) {
    synchronized (this) {
        lastName = name;
        nameCount++;
    }
    nameList.add(name);
}

In this example, the addName method needs to synchronize changes to lastName and nameCount , but also needs to avoid synchronizing invocations of other objects’ methods. (Invoking other objects’ methods from synchronized code can create problems that are described in the section on Liveness.) Without synchronized statements, there would have to be a separate, unsynchronized method for the sole purpose of invoking nameList.add .

Synchronized statements are also useful for improving concurrency with fine-grained synchronization. Suppose, for example, class MsLunch has two instance fields, c1 and c2 , that are never used together. All updates of these fields must be synchronized, but there’s no reason to prevent an update of c1 from being interleaved with an update of c2 — and doing so reduces concurrency by creating unnecessary blocking. Instead of using synchronized methods or otherwise using the lock associated with this , we create two objects solely to provide locks.

public class MsLunch {
    private long c1 = 0;
    private long c2 = 0;
    private Object lock1 = new Object();
    private Object lock2 = new Object();

    public void inc1() {
        synchronized (lock1) {
            c1++;
        }
    }

    public void inc2() {
        synchronized (lock2) {
            c2++;
        }
    }
}

Use this idiom with extreme care. You must be absolutely sure that it really is safe to interleave access of the affected fields.

Reentrant Synchronization

Recall that a thread cannot acquire a lock owned by another thread. But a thread can acquire a lock that it already owns. Allowing a thread to acquire the same lock more than once enables reentrant synchronization. This describes a situation where synchronized code, directly or indirectly, invokes a method that also contains synchronized code, and both sets of code use the same lock. Without reentrant synchronization, synchronized code would have to take many additional precautions to avoid having a thread cause itself to block.
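A minimal illustration (invented names): outer() calls inner() while already holding the object's lock, and the second acquisition succeeds because intrinsic locks are reentrant.

```java
// Invented example: reentrant acquisition of an intrinsic lock.
class Reentrant {
    synchronized int outer() {
        // This thread already owns the lock on `this`; calling inner()
        // acquires the same lock again, which succeeds because intrinsic
        // locks are reentrant. A non-reentrant lock would deadlock here.
        return inner() + 1;
    }

    synchronized int inner() {
        return 1;
    }
}
```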


How are synchronized blocks implemented in Java?

At some point threads will contend for the monitor, at this point one thread should win, does Java use atomic CAS operations built into the CPU to achieve the acquisition of these monitors, if not how does this work?

Why does it matter? synchronized delivers its concurrency guarantees using whatever platform-specific means are needed to ensure its goals; on systems that don’t support concurrency at all, or when Java could prove that the method would never be accessed concurrently, it could even be optimized into a no-op. How exactly it is implemented is an implementation detail that you needn’t worry about; if your Java implementation doesn’t deliver the guarantee, that would have to be either a bug or (rarely) a documented deviation from the standard.


Yes, but I’m interested in how this works. The reason is that synchronized blocks are preferred over Java CAS operations when contention is high, as CAS uses more CPU time under high contention. However, I believe Java synchronized must use a CAS operation at the CPU level in order to correctly acquire locks; if this is true, then it would create the same problem as using Java CAS operations instead of Java synchronization.

2 Answers

I don’t think so, since in the concurrent packages you can find the Atomic* classes, which use CAS internally.

Another thing is that it depends on which JVM you use. So in its current form your question is not really answerable, apart from telling you that CAS is used elsewhere.

CAS is what makes all concurrency work at the hardware level. If you want to change one value in memory across all threads, CAS is the fastest way to do it; any other technique is going to use CAS also. So for quick changes, CAS is the way to go. But if you have 100, or even 5 values to change, you’re likely better off using synchronization. It’ll do one CAS to lock the monitor and another to unlock it, but the rest is normal memory reads and writes which are much faster than CAS. Of course, you do have the monitor locked, which may hang other threads, slowing your program and possibly wasting CPU.

A bigger concern is that in Java any CAS (or reading/writing volatiles and synching/unsynching) is accompanied by bringing other threads’ views of memory up-to-date. When you write a volatile, the threads that read it see all the memory changes made by writing the thread. This involves dumping register values to memory, flushing caches, updating caches, and putting data back into registers. But these costs parallel CAS, so if you’ve got one figured out, you’ve got the other figured out too.

The basic idea, I think, from the programmer’s point of view, is to use volatile or atomic operations for single reads and writes, and synchronization for multiple ones, if there’s no other compelling reason to choose one over the other.
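That guidance (a CAS-based atomic for a single value, synchronized for a group of related values) might look like this in practice; the Stats class and its names are invented for illustration:

```java
import java.util.concurrent.atomic.AtomicLong;

// Invented example: CAS for one value, a monitor for a group of values.
class Stats {
    // Single independent value: a CAS-based atomic is the cheap option.
    private final AtomicLong hits = new AtomicLong();

    // Related values that must stay consistent with each other:
    // one lock guards them as a group.
    private long total = 0;
    private long count = 0;

    void recordHit() {
        hits.incrementAndGet(); // one CAS, no monitor acquisition
    }

    synchronized void record(long value) {
        // One lock/unlock pair; the reads and writes in between are
        // ordinary memory operations.
        total += value;
        count++;
    }

    long getHits() {
        return hits.get();
    }

    synchronized double average() {
        return count == 0 ? 0.0 : (double) total / count;
    }
}
```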

