After talking to Marius Gedminas on freenode, I got enough hints to rewrite my previous async view example using locks instead of Value, which was prone to race conditions. I also added a queue so that jobs can wait to be processed.

import time
from multiprocessing import Process, Lock, Queue

job = 0                 # counter used to number submitted jobs
q = Queue(maxsize=3)    # at most 3 jobs can wait to be processed
lock = Lock()           # held while a worker process is running

def work():
    time.sleep(8)  # simulate a long-running task
    job = q.get()
    print("Job done: {0}".format(job))
    print("Queue size: {0}\n".format(q.qsize()))
    if not q.empty():
        work()          # keep working until the queue is empty
    else:
        lock.release()  # no jobs left: signal that the worker has finished

def my_view(request):
    global job
    if not q.full():
        job += 1
        q.put(job)
        # The lock is free only if no worker process is running
        if lock.acquire(False):
            Process(target=work).start()
            print("Job {0} submitted and working on it".format(job))
        else:
            print("Job {0} submitted while working".format(job))
    else:
        print("Queue is full")
    print("Queue size: {0}\n".format(q.qsize()))
    return {'project':'asyncapp'}

Every request submits a job, and the queue accepts at most 3 waiting jobs. The recursion in work makes sure that only one process is working at a time: the worker keeps calling itself until the queue is empty, and only then releases the lock.
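
To see that behaviour without a web framework, a small driver like the one below can be used. It is only a sketch: it assumes the code above lives in the same module and runs on a fork-based platform (e.g. Linux), since the queue and the lock are shared as module-level globals, and the my_view(None) calls simply stand in for incoming requests.

if __name__ == '__main__':
    # Simulate five incoming requests, one second apart.
    for i in range(5):
        my_view(None)
        time.sleep(1)
    # The worker needs 8 seconds per job, so jobs 1-3 fill the queue
    # and requests 4 and 5 are rejected with "Queue is full".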

I will leave my previous example with Value up because it is easier to understand, but this version is much safer.

Update: You can avoid the use of locks by using 2 queues.
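
I have not written that version out yet, but here is a rough sketch of how it could look. It is only one possible interpretation: the jobs/done queue names and the single long-lived worker are assumptions of mine, not something from the original example.

import time
from multiprocessing import Process, Queue

jobs = Queue(maxsize=3)   # requests put jobs here
done = Queue()            # the worker reports finished jobs here

def work(jobs, done):
    while True:
        job = jobs.get()   # blocks until a job arrives
        time.sleep(8)      # simulate a long-running task
        done.put(job)

# A single long-lived worker, started when the application starts.
worker = Process(target=work, args=(jobs, done))
worker.daemon = True
worker.start()

job = 0

def my_view(request):
    global job
    if not jobs.full():
        job += 1
        jobs.put(job)
        print("Job {0} submitted".format(job))
    else:
        print("Queue is full")
    # Report whatever finished since the last request.
    while not done.empty():
        print("Job done: {0}".format(done.get()))
    return {'project': 'asyncapp'}

Because the worker never exits and simply blocks on jobs.get(), neither the recursion nor the lock is needed in this variant.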