One of the best features in Go is Goroutines. A goroutine is a lightweight thread managed by the Go runtime. Goroutines enable functions to run concurrently.
Imagine we receive a thousand orders in one second. Should we process them one by one? Think about it: if each order takes 300-500 milliseconds to process, here's the estimate for handling 1000 orders sequentially (one by one):
For 1000 Orders:
- Minimum total time = 1000 × 300ms = 300,000ms = 300 seconds = 5 minutes
- Maximum total time = 1000 × 500ms = 500,000ms = 500 seconds = 8.33 minutes
- Average total time = 1000 × 400ms = 400,000ms = 400 seconds = 6.67 minutes
This is terrible, right? 🤓
Taking an average of 6 minutes to process a thousand orders is inefficient. This is where Goroutines play a crucial role 🌟
When we process those thousand orders using Goroutines, we switch from sequential to concurrent processing. In the ideal case, where the work truly runs in parallel, the total time drops from around 6 minutes to roughly the duration of a single order - approximately 500 milliseconds. See the difference? It's about 6 minutes faster 🚀
Goroutines are a powerful feature in Go that enable concurrent processing, dramatically reducing execution time. As we saw earlier, processing 1000 orders sequentially takes around 6 minutes, while with Goroutines it takes just 500 milliseconds. However, this power comes with responsibility! 🚀
When multiple Goroutines access and modify shared resources simultaneously, we can encounter race conditions - situations where the final result becomes unpredictable. Without proper synchronization mechanisms such as a Mutex or atomic operations, our lightning-fast concurrent processing can silently corrupt data through lost or overwritten updates.
For example, if we don't handle Goroutines properly:
- Data inconsistency due to simultaneous access
- Unpredictable results from concurrent operations
- Memory leaks from improper resource management
- System instability from uncontrolled concurrent access

That's why we need synchronization tools like:
- sync.Mutex for locking access to shared resources
- atomic operations for thread-safe operations
- proper error handling and resource cleanup

Think of it like installing traffic lights at a busy intersection - yes, it might slow things down a tiny bit, but it prevents accidents and ensures everything runs smoothly! 🚦
Let's use a real-world example from an E-commerce service. Consider a Product struct that contains an ID, product name, and stock quantity:
type Product struct {
    ID    int
    Name  string
    Stock int32
}
We process orders by simply decreasing the stock based on the order quantity:
// Unsafe version - has race condition
func (p *Product) ProcessOrderUnsafe(orderQuantity int32) bool {
    if p.Stock >= orderQuantity {
        p.Stock -= orderQuantity
        return true
    }
    return false
}
Let's create a scenario where we have 5 customers, each buying 1 item. We'll group these 5 orders together and process 1000 such groups concurrently.
Starting with an initial stock of 10,000 items, we process 1000 groups of orders. In each group, 5 users each buy one item:
1 + 1 + 1 + 1 + 1 = 5 items per group
1000 groups × 5 items = 5000 total items
Therefore, we expect the stock to decrease by 5000, resulting in a final stock of 5000 (10,000 - 5000).
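The expected result is simple arithmetic, which we can encode directly as a sanity check:

```go
package main

import "fmt"

func main() {
	initialStock := int32(10000)
	groups := int32(1000)
	itemsPerGroup := int32(5) // 1 + 1 + 1 + 1 + 1

	expected := initialStock - groups*itemsPerGroup
	fmt.Println(expected) // 5000
}
```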
Here's the test scenario:
func TestOrderUnsafe(t *testing.T) {
    currentProduct := Product{
        ID:    14045,
        Name:  "J.Co - Snow White Donuts",
        Stock: 10000,
    }
    userBought := []int32{1, 1, 1, 1, 1}

    fmt.Printf("Current Stock: %d\n", currentProduct.Stock)

    wg := &sync.WaitGroup{}
    for i := 0; i < 1000; i++ {
        for _, bought := range userBought {
            wg.Add(1)
            go func(orderQTY int32) {
                defer wg.Done()
                currentProduct.ProcessOrderUnsafe(orderQTY)
            }(bought)
        }
    }
    wg.Wait()

    fmt.Printf("Final inventory (unsafe): %d\n", currentProduct.Stock)
}
Let's run the test using go test -v. (Tip: running it with go test -race would make Go's built-in race detector flag the problem explicitly.)
Everything seems fine until we see the results 💣
After running the test 3 times, we notice something strange - the final result is different each time, and it keeps changing with every run:
First Test: 5450
Second Test: 5222
Third Test: 5205
Something is seriously wrong here. The final stock should be exactly 5000 (10000 - 5000), but we're getting inconsistent and incorrect results (5450, 5222, 5205). None of these results are correct!
This inconsistency occurs because we've encountered a race condition. When multiple Goroutines try to access and modify the stock value simultaneously without proper synchronization, they interfere with each other's operations. It's like multiple cashiers trying to update the same inventory record at the same time - they might miss some updates or count the same transaction twice.
The race condition happens in these steps:
- Multiple Goroutines read the stock value at the same time
- Each Goroutine thinks it has the correct current value
- They all try to decrease the stock simultaneously
- Some updates are lost or overwritten in the process
- The final result becomes unpredictable and incorrect
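The lost-update sequence above can be replayed deterministically in plain code. Here g1 and g2 stand for the private copies two racing Goroutines would read - this is a simulation of the interleaving, not real concurrent code:

```go
package main

import "fmt"

func main() {
	stock := int32(10)

	// Both "goroutines" read the same stale value...
	g1 := stock // Goroutine 1 reads 10
	g2 := stock // Goroutine 2 also reads 10, before Goroutine 1 writes

	// ...then each writes back its own decrement.
	stock = g1 - 1 // Goroutine 1 writes 9
	stock = g2 - 1 // Goroutine 2 overwrites with 9 - the first decrement is lost

	fmt.Println(stock) // 9, not the expected 8
}
```

Two orders were "processed", but the stock only dropped by one - exactly the kind of loss behind our 5450/5222/5205 results.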
This is why we need proper synchronization mechanisms to handle concurrent access to shared resources! 🔒
Mutex Solution
Mutex works like a lock-and-key system. When a Goroutine needs to update the stock, it must first acquire a lock. While one Goroutine holds the lock, all others must wait their turn. The defer mu.Unlock() ensures we never forget to release the lock, preventing deadlocks. It's like having a single key that gets passed around - only the Goroutine holding the key can access the stock value.
// Solution 1: Using mutex
func (p *Product) ProcessOrderWithMutex(orderQuantity int32, mu *sync.Mutex) bool {
    mu.Lock()
    defer mu.Unlock()

    if p.Stock >= orderQuantity {
        p.Stock -= orderQuantity
        return true
    }
    return false
}
func TestOrderWithMutex(t *testing.T) {
    currentProduct := Product{
        ID:    14045,
        Name:  "J.Co - Snow White Donuts",
        Stock: 10000,
    }
    userBought := []int32{1, 1, 1, 1, 1}

    fmt.Printf("Current Stock: %d\n", currentProduct.Stock)

    wg := &sync.WaitGroup{}
    mu := &sync.Mutex{} // Create a single mutex shared by all Goroutines
    for i := 0; i < 1000; i++ {
        for _, bought := range userBought {
            wg.Add(1)
            go func(orderQTY int32) {
                defer wg.Done()
                currentProduct.ProcessOrderWithMutex(orderQTY, mu) // Pass the mutex to the method
            }(bought)
        }
    }
    wg.Wait()

    fmt.Printf("Final inventory (mutex): %d\n", currentProduct.Stock)
}
Here's how it works:
- mu.Lock() blocks other Goroutines from entering this section
- defer mu.Unlock() ensures the lock is released even if errors occur
- Only one Goroutine can modify the stock at a time
- This creates a queue of Goroutines waiting their turn
- The stock updates happen sequentially within the locked section
Here's the final test output:
=== RUN TestOrderWithMutex
Current Stock: 10000
Final inventory (mutex): 5000
--- PASS: TestOrderWithMutex (0.00s)
PASS
Process finished with the exit code 0
Atomic Solution
Atomic Operations function like a precise surgical tool. They perform operations on variables in a way that can't be interrupted by other Goroutines. When we use atomic operations, each stock update happens in a single, unbreakable step. If multiple Goroutines try to update the stock simultaneously, the atomic Compare-and-Swap (CAS) ensures only one succeeds while others retry. Think of it as a high-speed traffic intersection with sensors that only let one car pass at a time, but so quickly that traffic still flows smoothly.
// Solution 2: Using atomic operations
func (p *Product) ProcessOrderAtomic(orderQuantity int32) bool {
    for {
        currentInventory := atomic.LoadInt32(&p.Stock)
        if currentInventory < orderQuantity {
            return false
        }
        if atomic.CompareAndSwapInt32(&p.Stock, currentInventory, currentInventory-orderQuantity) {
            return true
        }
    }
}
func TestOrderWithAtomic(t *testing.T) {
    currentProduct := Product{
        ID:    14045,
        Name:  "J.Co - Snow White Donuts",
        Stock: 10000,
    }
    userBought := []int32{1, 1, 1, 1, 1}

    fmt.Printf("Current Stock: %d\n", currentProduct.Stock)

    wg := &sync.WaitGroup{}
    for i := 0; i < 1000; i++ {
        for _, bought := range userBought {
            wg.Add(1)
            go func(orderQTY int32) {
                defer wg.Done()
                currentProduct.ProcessOrderAtomic(orderQTY)
            }(bought)
        }
    }
    wg.Wait()

    fmt.Printf("Final inventory (atomic): %d\n", currentProduct.Stock)
}
Here's how it works:
- atomic.LoadInt32 safely reads the current stock value
- We check if we have enough stock
- CompareAndSwapInt32 (CAS) atomically updates the stock only if no other Goroutine has modified it
- If another Goroutine changed the value, the CAS fails and we retry the whole operation
- This ensures all stock updates are processed accurately
Here's the final test output:
=== RUN TestOrderWithAtomic
Current Stock: 10000
Final inventory (atomic): 5000
--- PASS: TestOrderWithAtomic (0.00s)
PASS
Process finished with the exit code 0
Both solutions solve our race condition problem, consistently giving us the correct final stock of 5000. 😄
The main difference lies in their approach: Atomic operations are generally faster for simple operations as they don't require full locks, while Mutex provides a more straightforward solution that's easier to understand and maintain, especially for complex operations involving multiple steps. The choice between them often depends on your specific use case and performance requirements. 🔒
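If you want to measure that trade-off for your own workload, Go's built-in benchmarks make the comparison straightforward. This sketch copies the Product type and both methods from above into one self-contained _test.go file; run it with go test -bench=. -benchmem:

```go
// race_condition_bench_test.go
// Product and its two methods are copied from the article so this file
// is self-contained.
package main

import (
	"sync"
	"sync/atomic"
	"testing"
)

type Product struct {
	ID    int
	Name  string
	Stock int32
}

func (p *Product) ProcessOrderWithMutex(orderQuantity int32, mu *sync.Mutex) bool {
	mu.Lock()
	defer mu.Unlock()
	if p.Stock >= orderQuantity {
		p.Stock -= orderQuantity
		return true
	}
	return false
}

func (p *Product) ProcessOrderAtomic(orderQuantity int32) bool {
	for {
		current := atomic.LoadInt32(&p.Stock)
		if current < orderQuantity {
			return false
		}
		if atomic.CompareAndSwapInt32(&p.Stock, current, current-orderQuantity) {
			return true
		}
	}
}

func BenchmarkProcessOrderWithMutex(b *testing.B) {
	p := &Product{Stock: 1 << 30} // large stock so it never runs out
	mu := &sync.Mutex{}
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			p.ProcessOrderWithMutex(1, mu)
		}
	})
}

func BenchmarkProcessOrderAtomic(b *testing.B) {
	p := &Product{Stock: 1 << 30}
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			p.ProcessOrderAtomic(1)
		}
	})
}
```

The exact numbers depend on your CPU and contention level, which is precisely why it's worth benchmarking rather than assuming.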
Author: Mufthi Ryanda