C++ Futures

Recently I learnt about Alice, a dialect of Standard ML and thus a mostly functional language. What Alice adds to ML is excellent support for high-level concurrency primitives such as Futures.

“A Future is a mechanism which can be used to provide values that will be resolved to another value asynchronously. Futures can also be called “promises” (see PromisePipelining). A method/function can return a future to its caller, and continue to compute the value that the future will resolve to in another thread of control […]” (Wiki)

Alice’s syntax for futures is especially pleasing. For example, an expression like ‘spawn exp’ evaluates ‘exp’ asynchronously in a new thread and returns immediately. Requesting the value of ‘exp’ implicitly synchronizes with the thread evaluating it, blocking until the result is available; later requests return the result directly.

Alice is not the only language supporting futures. Even Java gained a FutureTask in version 1.5, although the usual restrictions of Java apply there too.

I could not find an implementation of a future for C++, perhaps because multithreading is not standardized yet. However, it is not difficult to implement one. I will use Boost.Threads as the underlying threading framework, but other libraries would work just as well for this particular task.

Please note that the definition of futures does not specify whether the computation of the given expression should start immediately or only when the result is requested for the first time. I let evaluation begin on future creation, but an explicit start function could be added easily.
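
Purely as a sketch of that alternative, here is roughly what a lazily started variant could look like (the lazy_future class is hypothetical and not the implementation presented below):

// Hypothetical sketch of a lazily started variant (not the code used in
// this post): the worker thread is spawned only when start() is called,
// or on the first request of the result. For brevity it assumes the
// result is requested from a single thread.
#include <memory>
#include <tr1/functional>
#include <boost/bind.hpp>
#include <boost/utility.hpp>
#include <boost/thread/thread.hpp>

template<typename T>
class lazy_future : boost::noncopyable
{
public:
        explicit lazy_future(std::tr1::function<T ()> const & f)
        : v_(), f_(f), joined_(false) {}

        // Spawn the worker thread on demand; calling this twice is harmless.
        void start()
        {
                if (!t_.get())
                        t_.reset(new boost::thread(
                                boost::bind(&lazy_future::run, this)));
        }

        // Start lazily if nobody has yet, join exactly once, return the value.
        T operator()()
        {
                start();
                if (!joined_) {
                        t_->join();
                        joined_ = true;
                }
                return v_;
        }

private:
        void run() { v_ = f_(); }

        T v_;
        std::tr1::function<T ()> f_;
        std::auto_ptr<boost::thread> t_;
        bool joined_;
};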

Now here is the complete future implementation (Did I mention that Vim‘s HTML export is really handy?):

#ifndef FUTURE_HPP
#define FUTURE_HPP

#include <tr1/functional>
#include <boost/bind.hpp>
#include <boost/utility.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/thread.hpp>

using std::tr1::function;
using boost::thread;
using boost::mutex;

template<typename T>
class future : boost::noncopyable
{
public:
        future(function<T ()> const & f);
        T operator()();     // blocks until the result is available

private:
        void run();         // thread entry point: evaluates f_, stores the result

        T v_;
        function<T ()> f_;
        thread t_;
        mutex m_;
        bool joined_;
};

// Evaluation starts immediately: the worker thread is spawned in the
// constructor's initializer list.
template<typename T>
future<T>::future(function<T ()> const & f)
: v_(), f_(f), t_(boost::bind(&future<T>::run, this)), joined_(false) {}

// Join the worker thread exactly once (guarded by the mutex), then
// return the stored value.
template<typename T>
T future<T>::operator()()
{
        mutex::scoped_lock lock(m_);
        if (!joined_) {
                t_.join();
                joined_ = true;
        }
        return v_;
}

template<typename T>
void future<T>::run()
{
        v_ = f_();
}

#endif // FUTURE_HPP


I think the code is reasonably clear: we take a tr1::function and immediately start a thread that evaluates it. Obviously the expressive power of this future is not as high as Alice’s, since we can only evaluate functions, member functions or functors (classes that overload operator()()), not plain expressions like 2 + 3; but most non-trivial code is contained in functions anyway. If one of my favourite C++0x proposals makes it into the next standard revision, we may even adapt the future to handle lambda expressions.
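
To illustrate, here is a small sketch (add_2_3() is a made-up helper, and the commented-out lambda line follows the proposed C++0x syntax, so treat it as an assumption rather than working code):

#include <iostream>
#include "future.hpp"

// A plain expression such as 2 + 3 has to be wrapped in a function
// (or a functor) before it can be handed to the future.
int add_2_3() { return 2 + 3; }

int main()
{
        future<int> f(add_2_3);            // evaluation starts in a worker thread
        std::cout << f() << std::endl;     // joins and prints 5

        // With C++0x lambdas (hypothetical, assuming the proposal is accepted
        // and the closure converts to a function<int ()>):
        // future<int> g([]() -> int { return 2 + 3; });

        return 0;
}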

Turning now to the client side, it is trivial to use the future for asynchronous evaluation. In the sample code below I simulate work with sleep(). Two functions (func1() and func2()) do the heavy lifting, and two other functions use them through futures. The first one (immediate_request()) requests the value of the future immediately and thus blocks until the future has finished evaluating the result; a second request would then return the result immediately. The function doing_work_itm() also creates a future but does some work of its own before requesting the result, which is already available by the time that work is done (timing issues aside).

#include <iostream>
#include <unistd.h>
#include <boost/thread/thread.hpp>
#include "future.hpp"

using std::cout;
using std::endl;
using std::flush;

// Simulate expensive computations.
int func1() { sleep(15); return 42; }
int func2() { sleep(15); return 42; }

// Request the result right away: blocks until func1() has finished.
void immediate_request()
{
        future<int> f1(func1);
        cout << "Requesting result of f1 ..." << endl;
        cout << "f1 = " << f1() << endl;
}

// Do some work of our own first; by then the result is (usually) ready.
void doing_work_itm()
{
        future<int> f2(func2);
        cout << "Doing work ..." << endl;
        sleep(15);
        cout << "Requesting result of f2 ..." << endl;
        cout << "f2 = " << f2() << endl;
}

int main()
{
        boost::thread t1(immediate_request);
        boost::thread t2(doing_work_itm);
        t1.join();
        t2.join();
        return 0;
}


The nice thing about using futures is obviously the implicit, data-driven synchronization.
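
To make that concrete, here is a small hypothetical sketch (price_of_a() and price_of_b() are made-up stand-ins for slow computations): both evaluations run concurrently, and the caller blocks only at the point where the values are actually needed.

#include <iostream>
#include <unistd.h>
#include "future.hpp"

// Made-up stand-ins for two slow, independent computations.
int price_of_a() { sleep(5); return 10; }
int price_of_b() { sleep(5); return 32; }

int main()
{
        // Both evaluations start immediately and run concurrently.
        future<int> a(price_of_a);
        future<int> b(price_of_b);

        // We block only here, where the data is actually demanded;
        // the total wait is roughly 5 seconds instead of 10.
        std::cout << "total = " << a() + b() << std::endl;
        return 0;
}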

Comments

  1. Gert-Jan

    I really like the idea of a future. Nice example too, but why not just:

    template<typename T>
    class future : boost::noncopyable
    {
    public:
            explicit future(function<T ()> const & f)
            : v_(), f_(f), t_(boost::bind(&future<T>::run, this)) {}
            T operator()() { t_.join(); return v_; }

    private:
            void run() { v_ = f_(); }

            T v_;
            function<T ()> f_;
            thread t_;
    };

    You see: thread::join() implements the thread state synchronization for you; operator()() just join()s and returns v_.

  2. resonanz

    Thanks for your comment, Gert-Jan. Joining the thread does indeed seem like a viable and better alternative. However, there are some things to consider.

    Boost.Threads on Unix systems is layered on top of POSIX threads, i.e. boost::thread::join() calls pthread_join() which is not thread-safe:

    “The results of multiple simultaneous calls to pthread_join() specifying the same target thread are undefined.” (http://www.opengroup.org/onlinepubs/007908799/xsh/pthread_join.html)

    So we need a mutex to synchronize thread access in operator()().

    The other issue is that joining the thread requires that the thread is joinable, i.e. join() should be called only once.

    With these issues in mind we could add a mutex lock to operator()() and ensure that join() is called exactly once, e.g.

    template<typename T>
    T future<T>::operator()()
    {
            mutex::scoped_lock lock(m_);
            if (!joined_) {
                    t_.join();
                    joined_ = true;
            }
            return v_;
    }

    where the instance variable ‘joined_’ is a simple boolean value initialized to false.

    Other than that I like this version using join(). It does away with the additional lock, the enum and the busy wait – thanks again.

  3. resonanz

    Updated the future code.

  4. Maximus

    I would like to see a continuation of the topic.
