In computer programming, green threads or virtual threads are threads that are scheduled by a runtime library or a virtual machine (VM) instead of natively by the underlying operating system (OS).
- Wikipedia
To summarize: the Go scheduler works inside the Go runtime, in user space, on top of OS threads. Goroutines run in the context of those OS threads.
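As a quick illustration (a minimal sketch, not from the original article), the runtime package exposes a few of these knobs: NumCPU reports the number of logical CPUs, GOMAXPROCS controls how many OS threads may execute Go code simultaneously, and NumGoroutine shows how many goroutines the scheduler is currently multiplexing onto them.

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // How many logical CPUs the machine has, and how many OS threads
    // may execute Go code at the same time (GOMAXPROCS).
    fmt.Println("CPUs:", runtime.NumCPU())
    fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // passing 0 only reads the current value

    // A handful of goroutines, all multiplexed by the runtime scheduler
    // onto that small set of OS threads.
    for i := 0; i < 5; i++ {
        go func() {
            select {} // block forever; only here to keep the goroutine alive
        }()
    }
    fmt.Println("Goroutines:", runtime.NumGoroutine())
}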
Until version 1.14, Go had only cooperative scheduling. That means a goroutine decides by itself when to give up the processor, which happens only at certain points: a function call, an I/O operation, waiting for a mutex, reading from a channel, and so on. This can cause a problem: a single goroutine that never reaches any of those points hogs the CPU. So in 1.14 asynchronous preemption was introduced. Asynchronous preemption is triggered by a time condition: when a goroutine has been running for more than 10 milliseconds, the Go scheduler will try to preempt it.
To create a goroutine, we use the go keyword, like so:
go func() {
    // logic of the concurrent function
}()
package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Limit the scheduler to a single OS thread so the effect is visible.
    runtime.GOMAXPROCS(1)
    i := 0
    // A goroutine with a tight loop that never calls a function, does I/O,
    // or touches a channel, so it has no cooperative yield points.
    // (Toy example: the unsynchronized access to i is a data race.)
    go func(i *int) {
        for {
            *i++
        }
    }(&i)
    // Yield the main goroutine to the one above. Before Go 1.14 this program
    // would hang forever; with asynchronous preemption the scheduler takes
    // the CPU back and the counter gets printed.
    runtime.Gosched()
    fmt.Println(i)
}
Do not communicate by sharing memory; instead, share memory by communicating.
What does it mean? Working with concurrent programs is never easy, because you constantly have to keep race conditions, deadlocks, and other issues in mind. Go introduces channels to address this. A channel is a typed conduit for communication between goroutines: it carries values of a single type (int, string, some struct) and is created with the built-in make.
ch := make(chan int)
ch <- 2   // write
v := <-ch // read and assign the result to the variable v
for v := range ch { // receive values from ch until it is closed
    fmt.Println(v)
}
close(ch) // closing the channel makes range loops over it terminate
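To see these operations working together, here is a small self-contained sketch (not from the original article, just an illustration): one goroutine sends a few numbers and then closes the channel, while main ranges over it until it is closed.

package main

import "fmt"

func main() {
    ch := make(chan int)

    // Producer: send a few values, then close the channel so the
    // range loop below knows when to stop.
    go func() {
        for i := 1; i <= 3; i++ {
            ch <- i
        }
        close(ch)
    }()

    // Consumer: receive until the channel is closed.
    for v := range ch {
        fmt.Println("received", v)
    }
}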
Now let us apply this to a small practical task: checking whether a list of websites is reachable. First, a plain sequential version:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    websites := []string{
        "https://gzht888.com/",
        "https://github.com/",
        "https://apple.com/",
        "https://google.com/",
        "https://youtube.com/",
        "https://www.udemy.com/",
        "https://netflix.com/",
        "https://www.coursera.org/",
        "https://facebook.com/",
        "https://microsoft.com",
        "https://wikipedia.org",
        "https://educative.io",
        "https://acloudguru.com",
    }
    // Check every website one after another.
    for _, website := range websites {
        checkResource(website)
    }
}

func checkResource(website string) {
    if res, err := http.Get(website); err != nil {
        fmt.Println(website, "is down")
    } else {
        res.Body.Close() // release the connection
        fmt.Printf("[%d] %s is up\n", res.StatusCode, website)
    }
}
The output:

[200] https://gzht888.com/ is up
[200] https://github.com/ is up
[200] https://apple.com/ is up
[200] https://google.com/ is up
[200] https://youtube.com/ is up
[200] https://www.udemy.com/ is up
[200] https://netflix.com/ is up
[200] https://www.coursera.org/ is up
[200] https://facebook.com/ is up
[200] https://microsoft.com is up
[200] https://wikipedia.org is up
[200] https://educative.io is up
[200] https://acloudguru.com is up
Checking the websites one after another works, but it is slow: each request waits for the previous one to finish. Let us spread the work across a pool of worker goroutines:

func worker(resources, results chan string) {
    for resource := range resources {
        if res, err := http.Get(resource); err != nil {
            results <- resource + " is down"
        } else {
            res.Body.Close() // release the connection
            results <- fmt.Sprintf("[%d] %s is up", res.StatusCode, resource)
        }
    }
}
Let us quickly figure out what exactly is happening here. Each worker waits for a website URL on the resources channel. As soon as someone pushes a URL into that channel, a worker receives it, checks whether the site is reachable, and pushes the outcome to another channel called results.
func main() {
    websites := []string{
        // ...
    }

    // A buffered channel of URLs to check and an unbuffered channel for results.
    resources := make(chan string, 6)
    results := make(chan string)

    // Start a pool of six workers.
    for i := 0; i < 6; i++ {
        go worker(resources, results)
    }
}
Next, inside main, we feed the pool with work: a separate goroutine pushes every website URL into the resources channel.

go func() {
    for _, v := range websites {
        resources <- v
    }
}()
Why shouldn’t we push the URLs with synchronous inline code here? Take the final example below and remove the go keyword: you will catch a deadlock. The resources channel only buffers six URLs, and the workers quickly block sending to results because nothing is reading from it yet, so the main goroutine gets stuck on a send with no goroutine left to make progress.
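For intuition, here is a minimal sketch (not from the article) of the same failure mode in its simplest form: a send that no goroutine can ever receive.

package main

func main() {
    ch := make(chan string) // unbuffered: a send blocks until someone receives
    ch <- "hello"           // nobody ever receives
    // fatal error: all goroutines are asleep - deadlock!
}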
Now we not only have our worker pool, but we also feed it with work :) The last thing we need to do is read the results from the pool. For that, we can receive from the results channel in the main goroutine, once per website, and print each result:
for i := 0; i < len(websites); i++ {
    fmt.Println(<-results)
}
package main

import (
    "fmt"
    "net/http"
)

func main() {
    websites := []string{
        "https://gzht888.com/",
        "https://github.com/",
        "https://apple.com/",
        "https://google.com/",
        "https://youtube.com/",
        "https://www.udemy.com/",
        "https://netflix.com/",
        "https://www.coursera.org/",
        "https://facebook.com/",
        "https://microsoft.com",
        "https://wikipedia.org",
        "https://educative.io",
        "https://acloudguru.com",
    }

    // Work queue and result channel for the pool.
    resources := make(chan string, 6)
    results := make(chan string)

    // Start six workers.
    for i := 0; i < 6; i++ {
        go worker(resources, results)
    }

    // Feed the pool from a separate goroutine so main can start reading results.
    go func() {
        for _, v := range websites {
            resources <- v
        }
    }()

    // Collect exactly one result per website.
    for i := 0; i < len(websites); i++ {
        fmt.Println(<-results)
    }
}

func worker(resources, results chan string) {
    for resource := range resources {
        if res, err := http.Get(resource); err != nil {
            results <- resource + " is down"
        } else {
            res.Body.Close() // release the connection
            results <- fmt.Sprintf("[%d] %s is up", res.StatusCode, resource)
        }
    }
}