Merge lp:~jobh/dolfin/parallel-dirichlet into lp:~fenics-core/dolfin/trunk

Proposed by Joachim Haga
Status: Merged
Merge reported by: Garth Wells
Merged at revision: not available
Proposed branch: lp:~jobh/dolfin/parallel-dirichlet
Merge into: lp:~fenics-core/dolfin/trunk
Diff against target: 934 lines (+352/-67)
10 files modified
dolfin/common/MPI.cpp (+30/-1)
dolfin/common/MPI.h (+114/-16)
dolfin/fem/DirichletBC.cpp (+92/-22)
dolfin/fem/DirichletBC.h (+25/-8)
dolfin/fem/DofMap.cpp (+22/-1)
dolfin/fem/DofMap.h (+23/-1)
dolfin/fem/DofMapBuilder.cpp (+27/-3)
dolfin/fem/DofMapBuilder.h (+5/-0)
dolfin/fem/GenericDofMap.h (+10/-1)
dolfin/fem/SystemAssembler.cpp (+4/-14)
To merge this branch: bzr merge lp:~jobh/dolfin/parallel-dirichlet
Reviewer: Garth Wells (status: Approve)
Review via email: mp+95179@code.launchpad.net

Description of the change

This branch does the following to Dirichlet boundary conditions:

(a) Make the DirichletBC pointwise search faster (by skipping unnecessary facet initialisation and visiting each vertex only once).

(b) Make the DirichletBC topological and geometric searches safe in parallel for SystemAssembler (by adding a gather() method that exchanges boundary values with neighbouring processes).

==

(a) is a simple change: initialisation of facets is delayed until they are actually required. The timings for Poisson on a 48^3 cube are:

                                                    before [s]   after [s]
Topological search, compiled boundary subdomain         20.3        19.9
Topological search, Python boundary subdomain           29.2        29.5
Pointwise search, compiled boundary subdomain            21.4        19.9
Pointwise search, Python boundary subdomain              46.4        20.5  (!)

The recorded times include mesh creation, 2x assembly, 2x DirichletBC creation and 2x DirichletBC application. The times do not change much, except for pointwise search with a Python subdomain, which is dramatically improved. Pointwise search is now (much) faster than topological search when using Python subdomains.
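
In essence, the pointwise speedup comes from tracking which locally owned dofs have already been tested, so the (for Python subdomains, expensive) inside() callback runs at most once per dof. A condensed sketch, following the DirichletBC.cpp hunk in the diff below with the surrounding tabulation code abbreviated:

    // Condensed sketch (not the verbatim patch): test each locally owned dof
    // against the user-defined SubDomain at most once.
    const std::pair<uint, uint> local_range = dofmap.ownership_range();
    std::vector<bool> already_visited(local_range.second - local_range.first, false);

    for (CellIterator cell(mesh); !cell.end(); ++cell)
    {
      // ... update UFC cell, tabulate cell_dofs and dof coordinates ...
      for (uint i = 0; i < dofmap.cell_dimension(cell->index()); ++i)
      {
        const uint global_dof = cell_dofs[i];
        if (global_dof >= local_range.first && global_dof < local_range.second)
        {
          const uint dof_index = global_dof - local_range.first;
          if (already_visited[dof_index])
            continue;                       // dof already tested from another cell
          already_visited[dof_index] = true;
        }
        Array<double> x(gdim, &data.coordinates[i][0]);
        if (!_user_sub_domain->inside(x, false))
          continue;
        // ... interpolate g on the cell and record boundary_values[global_dof] ...
      }
    }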

(b) is more involved. It adds neighbour/dof-sharing information to DofMap, plus MPI functionality for exchanging data with neighbours: a new distribute() call, and a new send_recv, because the current one deadlocks when a different number of send_recvs is posted on different processes.
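
Schematically, the exchange then looks as follows (a sketch based on the new API in the diff below; the calling code in SystemAssembler and the body of gather() are abbreviated):

    // Caller side (SystemAssembler): complete the local boundary-value map with
    // values that are held only on neighbouring processes.
    DirichletBC::Map boundary_values;
    bc->get_boundary_values(boundary_values);
    if (MPI::num_processes() > 1 && bc->method() != "pointwise")
      bc->gather(boundary_values);

    // Inside gather(): group-wise exchange restricted to dofmap.neighbours(), so
    // no process-global collective (and no communicator duplication) is needed,
    // even though different processes post different numbers of messages.
    std::map<uint, std::vector<std::pair<uint, double> > > send_per_proc, recv_per_proc;
    // ... fill send_per_proc[p] with (dof, value) pairs for each neighbour p that
    //     shares the dof, using dofmap.shared_dofs() ...
    MPI::distribute(dofmap.neighbours(), send_per_proc, recv_per_proc);
    // ... insert the received (dof, value) pairs into boundary_values ...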

It passes the unit and system tests (except KrylovSolver, but that one fails for me in trunk too).

Garth Wells (garth-wells) wrote:

This crashes the auto-adaptive-poisson demo, which is probably due to a bug in the adaptive solver (see below). I suspect that the solver fails to properly initialise the mesh, and that the initialisation was previously done, by chance, in the bc code.

*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
***
*** https://answers.launchpad.net/dolfin
***
*** Remember to include the error message listed below and, if possible,
*** include a *minimal* running example to reproduce the error.
***
*** -------------------------------------------------------------------------
*** Error: Unable to initialize mapping of degrees of freedom.
*** Reason: Missing entities of dimension 1. Try calling mesh.init(1).
*** Where: This error was encountered inside DofMap.cpp.
*** -------------------------------------------------------------------------
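
The message itself hints at the one-line fix; where exactly it belongs in the adaptive solver is an assumption here:

    // Assumed location: wherever the adaptive solver prepares the (refined) mesh.
    // Create the dimension-1 entities explicitly rather than relying on the
    // DirichletBC facet search to create them as a side effect.
    mesh.init(1);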

review: Needs Fixing
Garth Wells (garth-wells):
review: Approve

Preview Diff

1=== modified file 'dolfin/common/MPI.cpp'
2--- dolfin/common/MPI.cpp 2011-11-18 12:14:50 +0000
3+++ dolfin/common/MPI.cpp 2012-02-29 14:31:44 +0000
4@@ -19,9 +19,10 @@
5 // Modified by Anders Logg 2007-2011
6 // Modified by Ola Skavhaug 2008-2009
7 // Modified by Niclas Jansson 2009
8+// Modified by Joachim B Haga 2012
9 //
10 // First added: 2007-11-30
11-// Last changed: 2011-08-25
12+// Last changed: 2012-02-29
13
14 #include <numeric>
15 #include <dolfin/log/dolfin_log.h>
16@@ -145,6 +146,23 @@
17 return r + (index - r * (n + 1)) / n;
18 }
19 //-----------------------------------------------------------------------------
20+dolfin::MPINonblocking::MPINonblocking()
21+{}
22+//-----------------------------------------------------------------------------
23+dolfin::MPINonblocking::~MPINonblocking()
24+{
25+ wait_all();
26+}
27+//-----------------------------------------------------------------------------
28+void dolfin::MPINonblocking::wait_all()
29+{
30+ if (!reqs.empty())
31+ {
32+ boost::mpi::wait_all(reqs.begin(), reqs.end());
33+ reqs.clear();
34+ }
35+}
36+//-----------------------------------------------------------------------------
37 //-----------------------------------------------------------------------------
38 #else
39 //-----------------------------------------------------------------------------
40@@ -217,4 +235,15 @@
41 return 0;
42 }
43 //-----------------------------------------------------------------------------
44+dolfin::MPINonblocking::Nonblocking()
45+{}
46+//-----------------------------------------------------------------------------
47+dolfin::MPINonblocking::~Nonblocking()
48+{}
49+//-----------------------------------------------------------------------------
50+void dolfin::MPINonblocking::wait_all()
51+{
52+ error_no_mpi("call MPINonblocking::wait_all");
53+}
54+
55 #endif
56
57=== modified file 'dolfin/common/MPI.h'
58--- dolfin/common/MPI.h 2011-12-04 11:33:42 +0000
59+++ dolfin/common/MPI.h 2012-02-29 14:31:44 +0000
60@@ -18,9 +18,10 @@
61 // Modified by Ola Skavhaug 2008-2009
62 // Modified by Anders Logg 2008-2011
63 // Modified by Niclas Jansson 2009
64+// Modified by Joachim B Haga 2012
65 //
66 // First added: 2007-11-30
67-// Last changed: 2011-09-28
68+// Last changed: 2012-02-29
69
70 #ifndef __MPI_DOLFIN_WRAPPER_H
71 #define __MPI_DOLFIN_WRAPPER_H
72@@ -31,6 +32,7 @@
73 #include <dolfin/log/dolfin_log.h>
74
75 #ifdef HAS_MPI
76+#include <boost/serialization/utility.hpp>
77 #include <boost/mpi.hpp>
78 #include <mpi.h>
79 #endif
80@@ -58,6 +60,32 @@
81 };
82 #endif
83
84+ /// This class provides stateful (single communicator) non-blocking MPI functionality.
85+
86+ class MPINonblocking
87+ {
88+ public:
89+ /// Create instance with associated MPI_WORLD communicator
90+ MPINonblocking();
91+
92+ /// Destroy instance (waits for outstanding requests)
93+ ~MPINonblocking();
94+
95+ /// Non-blocking send and receive
96+ template<typename T>
97+ void send_recv(const T& send_value, uint dest,
98+ T& recv_value, uint source);
99+
100+ /// Wait for all requests to finish
101+ void wait_all();
102+
103+ #ifdef HAS_MPI
104+ private:
105+ MPICommunicator mpi_comm;
106+ std::vector<boost::mpi::request> reqs;
107+ #endif
108+ };
109+
110 /// This class provides utility functions for easy communcation with MPI.
111
112 class MPI
113@@ -100,13 +128,22 @@
114 distribute(in_values, destinations, out_values, sources);
115 }
116
117+ /// Distribute local arrays on a group of processes (typically neighbours
118+ /// from GenericDofMap::neighbours()). It is important that each process'
119+ /// group includes exactly the processes that has it in their groups,
120+ /// otherwise it will deadlock.
121+ template<typename T>
122+ static void distribute(const std::set<uint> group,
123+ const std::map<uint,T>& in_values_per_dest,
124+ std::map<uint,T>& out_values_per_src);
125+
126 // Broadcast value from broadcaster process to all processes
127 template<typename T>
128 static void broadcast(T& value, uint broadcaster=0)
129 {
130 #ifdef HAS_MPI
131 MPICommunicator mpi_comm;
132- boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_duplicate);
133+ boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);
134 boost::mpi::broadcast(comm, value, broadcaster);
135 #endif
136 }
137@@ -118,7 +155,7 @@
138 {
139 #ifdef HAS_MPI
140 MPICommunicator mpi_comm;
141- boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_duplicate);
142+ boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);
143 boost::mpi::scatter(comm, in_values, out_value, sending_process);
144 #else
145 dolfin_assert(sending_process == 0);
146@@ -134,7 +171,7 @@
147 {
148 #ifdef HAS_MPI
149 MPICommunicator mpi_comm;
150- boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_duplicate);
151+ boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);
152 boost::mpi::gather(comm, in_value, out_values, receiving_process);
153 #else
154 out_values.clear();
155@@ -148,7 +185,7 @@
156 {
157 #ifdef HAS_MPI
158 MPICommunicator mpi_comm;
159- boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_duplicate);
160+ boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);
161 boost::mpi::all_gather(comm, in_value, out_values);
162 #else
163 out_values.clear();
164@@ -191,7 +228,7 @@
165 {
166 #ifdef HAS_MPI
167 MPICommunicator mpi_comm;
168- boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_duplicate);
169+ boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);
170 T out;
171 boost::mpi::all_reduce(comm, value, out, op);
172 return out;
173@@ -207,20 +244,17 @@
174 /// reduction op)
175 static uint global_offset(uint range, bool exclusive);
176
177- /// Send-receive and data
178+ /// Send-receive data. Note that if the number of posted send-receives may
179+ /// differ between processes, another interface (such as MPINonblocking::send_recv)
180+ /// must be used since duplicating the communicator requires participation
181+ /// from all processes.
182 template<typename T>
183 static void send_recv(const T& send_value, uint dest,
184 T& recv_value, uint source)
185 {
186 #ifdef HAS_MPI
187- MPICommunicator mpi_comm;
188- boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_duplicate);
189-
190- // Non-blocking send-receive
191- boost::mpi::request reqs[2];
192- reqs[0] = comm.isend(dest, 0, send_value);
193- reqs[1] = comm.irecv(source, 0, recv_value);
194- boost::mpi::wait_all(reqs, reqs + 2);
195+ MPINonblocking mpi;
196+ mpi.send_recv(send_value, dest, recv_value, source);
197 #else
198 dolfin_error("MPI.h",
199 "call MPI::send_recv",
200@@ -246,6 +280,13 @@
201
202 private:
203
204+ #ifndef HAS_MPI
205+ static void error_no_mpi(const char *where)
206+ {
207+ dolfin_error("MPI.h", msg, "DOLFIN has been configured without MPI support");
208+ }
209+ #endif
210+
211 #ifdef HAS_MPI
212 // Return MPI data type
213 template<typename T> static MPI_Datatype mpi_type()
214@@ -259,6 +300,7 @@
215
216 };
217
218+
219 #ifdef HAS_MPI
220 // Specialisations for MPI_Datatypes
221 template<> inline MPI_Datatype MPI::mpi_type<double>() { return MPI_DOUBLE; }
222@@ -337,7 +379,63 @@
223 "DOLFIN has been configured without MPI support");
224 }
225 #endif
226- //---------------------------------------------------------------------------
227+
228+ //-----------------------------------------------------------------------------
229+ template<typename T>
230+ void dolfin::MPI::distribute(const std::set<uint> group,
231+ const std::map<uint,T>& in_values_per_dest,
232+ std::map<uint,T>& out_values_per_src)
233+ {
234+ #ifdef HAS_MPI
235+ typedef typename std::map<uint,T>::const_iterator map_const_iterator;
236+ typedef typename std::map<uint,T>::iterator map_iterator;
237+ dolfin::MPINonblocking mpi;
238+ const T no_data;
239+
240+ // Send and receive values to all processes in groups (non-blocking). If a
241+ // given process is not found in in_values_per_dest, send empty data.
242+
243+ out_values_per_src.clear();
244+ for (std::set<uint>::const_iterator dest = group.begin(); dest != group.end(); ++dest)
245+ {
246+ map_const_iterator values = in_values_per_dest.find(*dest);
247+ if (values != in_values_per_dest.end())
248+ mpi.send_recv(values->second, *dest, out_values_per_src[*dest], *dest);
249+ else
250+ mpi.send_recv(no_data, *dest, out_values_per_src[*dest], *dest);
251+ }
252+
253+ // Wait for all MPI calls before modifying out_values_per_src
254+
255+ mpi.wait_all();
256+
257+ // Remove received no_data entries.
258+
259+ map_iterator it = out_values_per_src.begin();
260+ while (it != out_values_per_src.end())
261+ {
262+ map_iterator tmp = it++;
263+ if (tmp->second.empty())
264+ out_values_per_src.erase(tmp); // map::erase only invalidates current iterator
265+ }
266+ #else
267+ error_no_mpi("call MPI::distribute");
268+ #endif
269+ }
270+
271+ //-----------------------------------------------------------------------------
272+ template<typename T>
273+ void dolfin::MPINonblocking::send_recv(const T& send_value, uint dest,
274+ T& recv_value, uint source)
275+ {
276+ #ifdef HAS_MPI
277+ boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);
278+ reqs.push_back(comm.isend(dest, 0, send_value));
279+ reqs.push_back(comm.irecv(source, 0, recv_value));
280+ #else
281+ error_no_mpi("call MPINonblocking::send_recv");
282+ #endif
283+ }
284
285 }
286
287
288=== modified file 'dolfin/fem/DirichletBC.cpp'
289--- dolfin/fem/DirichletBC.cpp 2011-12-15 00:03:23 +0000
290+++ dolfin/fem/DirichletBC.cpp 2012-02-29 14:31:44 +0000
291@@ -18,13 +18,15 @@
292 // Modified by Kristian Oelgaard, 2008
293 // Modified by Martin Sandve Alnes, 2008
294 // Modified by Johan Hake, 2009
295+// Modified by Joachim B Haga, 2009
296 //
297 // First added: 2007-04-10
298-// Last changed: 2011-09-19
299+// Last changed: 2012-02-29
300
301 #include <map>
302 #include <utility>
303 #include <boost/assign/list_of.hpp>
304+#include <boost/serialization/utility.hpp>
305
306 #include <dolfin/common/constants.h>
307 #include <dolfin/common/Array.h>
308@@ -67,7 +69,6 @@
309 {
310 check();
311 parameters = default_parameters();
312- init_from_sub_domain(_user_sub_domain);
313 }
314 //-----------------------------------------------------------------------------
315 DirichletBC::DirichletBC(boost::shared_ptr<const FunctionSpace> V,
316@@ -80,7 +81,6 @@
317 {
318 check();
319 parameters = default_parameters();
320- init_from_sub_domain(_user_sub_domain);
321 }
322 //-----------------------------------------------------------------------------
323 DirichletBC::DirichletBC(const FunctionSpace& V, const GenericFunction& g,
324@@ -89,11 +89,12 @@
325 : BoundaryCondition(V),
326 Hierarchical<DirichletBC>(*this),
327 g(reference_to_no_delete_pointer(g)),
328- _method(method)
329+ _method(method),
330+ _user_mesh_function(reference_to_no_delete_pointer(sub_domains)),
331+ _user_sub_domain_marker(sub_domain)
332 {
333 check();
334 parameters = default_parameters();
335- init_from_mesh_function(sub_domains, sub_domain);
336 }
337 //-----------------------------------------------------------------------------
338 DirichletBC::DirichletBC(boost::shared_ptr<const FunctionSpace> V,
339@@ -103,22 +104,23 @@
340 std::string method)
341 : BoundaryCondition(V),
342 Hierarchical<DirichletBC>(*this),
343- g(g), _method(method)
344+ g(g), _method(method),
345+ _user_mesh_function(sub_domains),
346+ _user_sub_domain_marker(sub_domain)
347 {
348 check();
349 parameters = default_parameters();
350- init_from_mesh_function(*sub_domains, sub_domain);
351 }
352 //-----------------------------------------------------------------------------
353 DirichletBC::DirichletBC(const FunctionSpace& V, const GenericFunction& g,
354 uint sub_domain, std::string method)
355 : BoundaryCondition(V),
356 Hierarchical<DirichletBC>(*this),
357- g(reference_to_no_delete_pointer(g)), _method(method)
358+ g(reference_to_no_delete_pointer(g)), _method(method),
359+ _user_sub_domain_marker(sub_domain)
360 {
361 check();
362 parameters = default_parameters();
363- init_from_mesh(sub_domain);
364 }
365 //-----------------------------------------------------------------------------
366 DirichletBC::DirichletBC(boost::shared_ptr<const FunctionSpace> V,
367@@ -126,11 +128,11 @@
368 uint sub_domain, std::string method)
369 : BoundaryCondition(V),
370 Hierarchical<DirichletBC>(*this),
371- g(g), _method(method)
372+ g(g), _method(method),
373+ _user_sub_domain_marker(sub_domain)
374 {
375 check();
376 parameters = default_parameters();
377- init_from_mesh(sub_domain);
378 }
379 //-----------------------------------------------------------------------------
380 DirichletBC::DirichletBC(boost::shared_ptr<const FunctionSpace> V,
381@@ -201,6 +203,43 @@
382 apply(&A, &b, &x);
383 }
384 //-----------------------------------------------------------------------------
385+void DirichletBC::gather(Map& boundary_values) const
386+{
387+ typedef std::vector<std::pair<uint, double> > bv_vec_type;
388+ typedef std::map<uint, bv_vec_type> map_type;
389+
390+ typedef boost::unordered_map<uint, std::vector<uint> > shared_dof_type;
391+ typedef shared_dof_type::const_iterator shared_dof_iterator;
392+ typedef std::vector<uint>::const_iterator proc_iterator;
393+
394+ dolfin_assert(_function_space->dofmap());
395+ const GenericDofMap& dofmap = *_function_space->dofmap();
396+ const shared_dof_type& shared_dofs = dofmap.shared_dofs();
397+
398+ // Create list of boundary values to send to each processor
399+
400+ map_type proc_map;
401+ for (Map::const_iterator bv = boundary_values.begin(); bv != boundary_values.end(); ++bv)
402+ {
403+ // If the boundary value is attached to a shared dof, add it to the list of
404+ // boundary values for each of the processors that share it
405+ shared_dof_iterator shared_dof = shared_dofs.find(bv->first);
406+ if (shared_dof != shared_dofs.end())
407+ for (proc_iterator proc = shared_dof->second.begin(); proc != shared_dof->second.end(); ++proc)
408+ proc_map[*proc].push_back(*bv);
409+ }
410+
411+ // Distribute the lists between neighbours
412+
413+ map_type received_bvs;
414+ MPI::distribute(dofmap.neighbours(), proc_map, received_bvs);
415+
416+ // Add the received boundary values to the local boundary values
417+
418+ for (map_type::const_iterator it = received_bvs.begin(); it != received_bvs.end(); ++it)
419+ boundary_values.insert(it->second.begin(), it->second.end());
420+}
421+//-----------------------------------------------------------------------------
422 void DirichletBC::get_boundary_values(Map& boundary_values,
423 std::string method) const
424 {
425@@ -564,7 +603,20 @@
426 }
427 }
428 //-----------------------------------------------------------------------------
429-void DirichletBC::init_from_sub_domain(boost::shared_ptr<const SubDomain> sub_domain)
430+void DirichletBC::init_facets() const
431+{
432+ if (facets.size() > 0)
433+ return;
434+
435+ if (_user_sub_domain)
436+ init_from_sub_domain(_user_sub_domain);
437+ else if (_user_mesh_function)
438+ init_from_mesh_function(*_user_mesh_function, _user_sub_domain_marker);
439+ else
440+ init_from_mesh(_user_sub_domain_marker);
441+}
442+//-----------------------------------------------------------------------------
443+void DirichletBC::init_from_sub_domain(boost::shared_ptr<const SubDomain> sub_domain) const
444 {
445 dolfin_assert(facets.size() == 0);
446
447@@ -595,7 +647,7 @@
448 }
449 //-----------------------------------------------------------------------------
450 void DirichletBC::init_from_mesh_function(const MeshFunction<uint>& sub_domains,
451- uint sub_domain)
452+ uint sub_domain) const
453 {
454 dolfin_assert(facets.size() == 0);
455
456@@ -624,7 +676,7 @@
457 }
458 }
459 //-----------------------------------------------------------------------------
460-void DirichletBC::init_from_mesh(uint sub_domain)
461+void DirichletBC::init_from_mesh(uint sub_domain) const
462 {
463 dolfin_assert(facets.size() == 0);
464
465@@ -678,6 +730,8 @@
466 dolfin_assert(_function_space);
467 dolfin_assert(g);
468
469+ init_facets();
470+
471 // Special case
472 if (facets.size() == 0)
473 {
474@@ -724,18 +778,19 @@
475 const double value = data.w[data.facet_dofs[i]];
476 boundary_values[global_dof] = value;
477 }
478-
479 p++;
480 }
481 }
482 //-----------------------------------------------------------------------------
483 void DirichletBC::compute_bc_geometric(Map& boundary_values,
484- BoundaryCondition::LocalData& data) const
485+ BoundaryCondition::LocalData& data) const
486 {
487 dolfin_assert(_function_space);
488 dolfin_assert(_function_space->element());
489 dolfin_assert(g);
490
491+ init_facets();
492+
493 // Special case
494 if (facets.size() == 0)
495 {
496@@ -814,7 +869,11 @@
497 dolfin_assert(_function_space);
498 dolfin_assert(_function_space->element());
499 dolfin_assert(g);
500- dolfin_assert(_user_sub_domain);
501+
502+ if (!_user_sub_domain)
503+ dolfin_error("DirichletBC.cpp",
504+ "computing Dirichlet boundary values, pointwise search",
505+ "A SubDomain is required for pointwise search");
506
507 // Get mesh and dofmap
508 dolfin_assert(_function_space->mesh());
509@@ -828,9 +887,13 @@
510 // Create UFC cell object
511 UFCCell ufc_cell(mesh);
512
513+ // Speeder-upper
514+ std::pair<uint,uint> local_range = dofmap.ownership_range();
515+ std::vector<bool> already_visited(local_range.second - local_range.first);
516+ std::fill(already_visited.begin(), already_visited.end(), false);
517+
518 // Iterate over cells
519 Progress p("Computing Dirichlet boundary values, pointwise search", mesh.num_cells());
520- Array<double> x(gdim);
521 for (CellIterator cell(mesh); !cell.end(); ++cell)
522 {
523 // Update UFC cell
524@@ -848,9 +911,17 @@
525 // Loop all dofs on cell
526 for (uint i = 0; i < dofmap.cell_dimension(cell->index()); ++i)
527 {
528- // Check if the coordinates are part of the sub domain (calls user-defined 'indside' function)
529- for (uint j = 0; j < gdim; ++j)
530- x[j] = data.coordinates[i][j];
531+ const uint global_dof = cell_dofs[i];
532+ if (global_dof >= local_range.first && global_dof < local_range.second)
533+ {
534+ const uint dof_index = global_dof - local_range.first;
535+ if (already_visited[dof_index])
536+ continue;
537+ already_visited[dof_index] = true;
538+ }
539+
540+ // Check if the coordinates are part of the sub domain (calls user-defined 'inside' function)
541+ Array<double> x(gdim, &data.coordinates[i][0]);
542 if (!_user_sub_domain->inside(x, false))
543 continue;
544
545@@ -863,7 +934,6 @@
546 }
547
548 // Set boundary value
549- const uint global_dof = cell_dofs[i];
550 const double value = data.w[i];
551 boundary_values[global_dof] = value;
552 }
553
554=== modified file 'dolfin/fem/DirichletBC.h'
555--- dolfin/fem/DirichletBC.h 2011-10-26 08:59:01 +0000
556+++ dolfin/fem/DirichletBC.h 2012-02-29 14:31:44 +0000
557@@ -17,9 +17,10 @@
558 //
559 // Modified by Kristian Oelgaard, 2007
560 // Modified by Johan Hake, 2009
561+// Modified by Joachim B Haga, 2012
562 //
563 // First added: 2007-04-10
564-// Last changed: 2011-04-13
565+// Last changed: 2012-02-29
566 //
567 // FIXME: This class needs some cleanup, in particular collecting
568 // FIXME: all data from different representations into a common
569@@ -94,8 +95,6 @@
570 /// three possibilties are "topological", "geometric" and
571 /// "pointwise".
572
573- /// This class specifies the interface for setting (strong)
574-
575 class DirichletBC : public BoundaryCondition, public Hierarchical<DirichletBC>
576 {
577
578@@ -301,6 +300,16 @@
579 void get_boundary_values(Map& boundary_values,
580 std::string method="default") const;
581
582+
583+ /// Get boundary values from neighbour processes. If a method other than
584+ /// "pointwise" is used, this is necessary to ensure all boundary dofs are
585+ /// marked on all processes.
586+ ///
587+ /// *Arguments*
588+ /// boundary_values (boost::unordered_map<uint, double>)
589+ /// Map from dof to boundary value.
590+ void gather(Map& boundary_values) const;
591+
592 /// Make rows of matrix associated with boundary condition zero,
593 /// useful for non-diagonal matrices in a block matrix.
594 ///
595@@ -401,15 +410,18 @@
596 // Check input data to constructor
597 void check() const;
598
599+ // Initialize facets (from sub domain, mesh, etc)
600+ void init_facets() const;
601+
602 // Initialize sub domain markers from sub domain
603- void init_from_sub_domain(boost::shared_ptr<const SubDomain> sub_domain);
604+ void init_from_sub_domain(boost::shared_ptr<const SubDomain> sub_domain) const;
605
606 // Initialize sub domain markers from MeshFunction
607 void init_from_mesh_function(const MeshFunction<uint>& sub_domains,
608- uint sub_domain);
609+ uint sub_domain) const;
610
611 // Initialize sub domain markers from mesh
612- void init_from_mesh(uint sub_domain);
613+ void init_from_mesh(uint sub_domain) const;
614
615 // Compute dofs and values for application of boundary conditions using
616 // given method
617@@ -444,8 +456,13 @@
618 boost::shared_ptr<const SubDomain> _user_sub_domain;
619
620 // Boundary facets, stored as pairs (cell, local facet number)
621- std::vector<std::pair<uint, uint> > facets;
622-
623+ mutable std::vector<std::pair<uint, uint> > facets;
624+
625+ // User defined mesh function
626+ boost::shared_ptr<const MeshFunction<uint> > _user_mesh_function;
627+
628+ // User defined sub domain marker for mesh or mesh function
629+ uint _user_sub_domain_marker;
630 };
631
632 }
633
634=== modified file 'dolfin/fem/DofMap.cpp'
635--- dolfin/fem/DofMap.cpp 2011-11-18 12:14:50 +0000
636+++ dolfin/fem/DofMap.cpp 2012-02-29 14:31:44 +0000
637@@ -19,9 +19,10 @@
638 // Modified by Kent-Andre Mardal, 2009
639 // Modified by Ola Skavhaug, 2009
640 // Modified by Niclas Jansson, 2009
641+// Modified by Joachim B Haga, 2012
642 //
643 // First added: 2007-03-01
644-// Last changed: 2011-10-31
645+// Last changed: 2012-02-29
646
647 #include <dolfin/common/MPI.h>
648 #include <dolfin/common/NoDeleter.h>
649@@ -173,6 +174,8 @@
650 // Modify dofmap for non-UFC numbering
651 ufc_map_to_dofmap.clear();
652 _off_process_owner.clear();
653+ _shared_dofs.clear();
654+ _neighbours.clear();
655 if (parent_dofmap.ufc_map_to_dofmap.size() > 0)
656 {
657 boost::unordered_map<uint, uint>::const_iterator ufc_to_current_dof;
658@@ -196,6 +199,14 @@
659 boost::unordered_map<uint, uint>::const_iterator parent_off_proc = parent_dofmap._off_process_owner.find(*dof);
660 if (parent_off_proc != parent_dofmap._off_process_owner.end())
661 _off_process_owner.insert(*parent_off_proc);
662+
663+ // Add to shared-dof process map, and update the set of neighbours
664+ boost::unordered_map<uint, std::vector<uint> >::const_iterator parent_shared = parent_dofmap._shared_dofs.find(*dof);
665+ if (parent_shared != parent_dofmap._shared_dofs.end())
666+ {
667+ _shared_dofs.insert(*parent_shared);
668+ _neighbours.insert(parent_shared->second.begin(), parent_shared->second.end());
669+ }
670 }
671 }
672 }
673@@ -309,6 +320,16 @@
674 return _off_process_owner;
675 }
676 //-----------------------------------------------------------------------------
677+const boost::unordered_map<uint, std::vector<uint> >& DofMap::shared_dofs() const
678+{
679+ return _shared_dofs;
680+}
681+//-----------------------------------------------------------------------------
682+const std::set<uint>& DofMap::neighbours() const
683+{
684+ return _neighbours;
685+}
686+//-----------------------------------------------------------------------------
687 void DofMap::tabulate_facet_dofs(uint* dofs, uint local_facet) const
688 {
689 dolfin_assert(_ufc_dofmap);
690
691=== modified file 'dolfin/fem/DofMap.h'
692--- dolfin/fem/DofMap.h 2011-11-18 12:14:50 +0000
693+++ dolfin/fem/DofMap.h 2012-02-29 14:31:44 +0000
694@@ -18,9 +18,10 @@
695 // Modified by Martin Alnes, 2008
696 // Modified by Kent-Andre Mardal, 2009
697 // Modified by Ola Skavhaug, 2009
698+// Modified by Joachim B Haga, 2012
699 //
700 // First added: 2007-03-01
701-// Last changed: 2011-10-31
702+// Last changed: 2012-02-29
703
704 #ifndef __DOLFIN_DOF_MAP_H
705 #define __DOLFIN_DOF_MAP_H
706@@ -178,6 +179,21 @@
707 /// The map from non-local dofs.
708 const boost::unordered_map<unsigned int, unsigned int>& off_process_owner() const;
709
710+ /// Return map from all shared dofs to the processes (not including the current
711+ /// process) that share it.
712+ ///
713+ /// *Returns*
714+ /// boost::unordered_map<uint, std::vector<uint> >
715+ /// The map from dofs to list of processes
716+ const boost::unordered_map<uint, std::vector<uint> >& shared_dofs() const;
717+
718+ /// Return set of all neighbouring processes.
719+ ///
720+ /// *Returns*
721+ /// boost::unordered_set<uint>
722+ /// The set of processes
723+ const std::set<uint>& neighbours() const;
724+
725 /// Local-to-global mapping of dofs on a cell
726 ///
727 /// *Arguments*
728@@ -346,6 +362,12 @@
729 // this process
730 boost::unordered_map<uint, uint> _off_process_owner;
731
732+ // List of processes that share a given dof
733+ boost::unordered_map<uint, std::vector<uint> > _shared_dofs;
734+
735+ // Neighbours (processes that we share dofs with)
736+ std::set<uint> _neighbours;
737+
738 // True iff sub-dofmap (a view, i.e. not collapsed)
739 bool _is_view;
740
741
742=== modified file 'dolfin/fem/DofMapBuilder.cpp'
743--- dolfin/fem/DofMapBuilder.cpp 2011-11-18 12:14:50 +0000
744+++ dolfin/fem/DofMapBuilder.cpp 2012-02-29 14:31:44 +0000
745@@ -17,9 +17,10 @@
746 //
747 // Modified by Niclas Jansson 2009.
748 // Modified by Garth N. Wells 2010.
749+// Modified by Joachim B Haga, 2012.
750 //
751 // First added: 2008-08-12
752-// Last changed: 2011-11-14
753+// Last changed: 2012-02-29
754
755 #include <ufc.h>
756 #include <boost/random.hpp>
757@@ -87,18 +88,20 @@
758 {
759 // Create data structures
760 DofMapBuilder::set owned_dofs, shared_owned_dofs, shared_unowned_dofs;
761+ DofMapBuilder::vec_map shared_dof_processes;
762
763 // Computed owned and shared dofs (and owned and un-owned)
764 compute_ownership(owned_dofs, shared_owned_dofs, shared_unowned_dofs,
765- dofmap, global_dofs, mesh);
766+ shared_dof_processes, dofmap, global_dofs, mesh);
767
768 // Renumber dofs owned dofs and received new numbering for unowned shared dofs
769 parallel_renumber(owned_dofs, shared_owned_dofs, shared_unowned_dofs,
770- dofmap, mesh);
771+ shared_dof_processes, dofmap, mesh);
772 }
773 //-----------------------------------------------------------------------------
774 void DofMapBuilder::compute_ownership(set& owned_dofs, set& shared_owned_dofs,
775 set& shared_unowned_dofs,
776+ vec_map& shared_dof_processes,
777 const DofMap& dofmap,
778 const DofMapBuilder::set& global_dofs,
779 const Mesh& mesh)
780@@ -174,6 +177,7 @@
781 {
782 const uint received_dof = recv_buffer[i];
783 const uint received_vote = recv_buffer[i + 1];
784+
785 if (shared_owned_dofs.find(received_dof) != shared_owned_dofs.end())
786 {
787 // Move dofs with higher ownership votes from shared to shared but not owned
788@@ -191,6 +195,14 @@
789 "compute mapping of degrees of freedom",
790 "Cannot decide on dof ownership; votes are equal");
791 }
792+
793+ // Remember the sharing of the dof
794+ shared_dof_processes[received_dof].push_back(src);
795+ }
796+ else if (shared_unowned_dofs.find(received_dof) != shared_unowned_dofs.end())
797+ {
798+ // Remember the sharing of the dof
799+ shared_dof_processes[received_dof].push_back(src);
800 }
801 }
802 }
803@@ -240,6 +252,7 @@
804 void DofMapBuilder::parallel_renumber(const set& owned_dofs,
805 const set& shared_owned_dofs,
806 const set& shared_unowned_dofs,
807+ const vec_map& shared_dof_processes,
808 DofMap& dofmap, const Mesh& mesh)
809 {
810 log(TRACE, "Renumber dofs for parallel dof map");
811@@ -317,6 +330,17 @@
812 }
813 }
814
815+ // Insert the shared-dof-to-process mapping into the dofmap, renumbering as necessary
816+ for (vec_map::const_iterator it = shared_dof_processes.begin(); it != shared_dof_processes.end(); ++it)
817+ {
818+ boost::unordered_map<uint, uint>::const_iterator new_index = old_to_new_dof_index.find(it->first);
819+ if (new_index == old_to_new_dof_index.end())
820+ dofmap._shared_dofs.insert(*it);
821+ else
822+ dofmap._shared_dofs.insert(std::make_pair(new_index->second, it->second));
823+ dofmap._neighbours.insert(it->second.begin(), it->second.end());
824+ }
825+
826 // Build new dof map
827 for (CellIterator cell(mesh); !cell.end(); ++cell)
828 {
829
830=== modified file 'dolfin/fem/DofMapBuilder.h'
831--- dolfin/fem/DofMapBuilder.h 2011-11-24 17:51:44 +0000
832+++ dolfin/fem/DofMapBuilder.h 2012-02-29 14:31:44 +0000
833@@ -25,6 +25,7 @@
834
835 #include <set>
836 #include <boost/unordered_set.hpp>
837+#include <boost/unordered_map.hpp>
838 #include <dolfin/common/Set.h>
839
840 namespace ufc
841@@ -55,6 +56,8 @@
842
843 typedef std::vector<dolfin::uint>::const_iterator vector_it;
844
845+ typedef boost::unordered_map<uint, std::vector<uint> > vec_map;
846+
847 public:
848
849 static void build(DofMap& dofmap, const Mesh& dolfin_mesh,
850@@ -70,6 +73,7 @@
851
852 static void compute_ownership(set& owned_dofs, set& shared_owned_dofs,
853 set& shared_unowned_dofs,
854+ vec_map& shared_dof_processes,
855 const DofMap& dofmap,
856 const DofMapBuilder::set& global_dofs,
857 const Mesh& mesh);
858@@ -77,6 +81,7 @@
859 static void parallel_renumber(const set& owned_dofs,
860 const set& shared_owned_dofs,
861 const set& shared_unowned_dofs,
862+ const vec_map& shared_dof_processes,
863 DofMap& dofmap, const Mesh& mesh);
864
865 /// Compute set of global dofs (e.g. Reals associated with global
866
867=== modified file 'dolfin/fem/GenericDofMap.h'
868--- dolfin/fem/GenericDofMap.h 2011-09-27 07:14:59 +0000
869+++ dolfin/fem/GenericDofMap.h 2012-02-29 14:31:44 +0000
870@@ -15,8 +15,10 @@
871 // You should have received a copy of the GNU Lesser General Public License
872 // along with DOLFIN. If not, see <http://www.gnu.org/licenses/>.
873 //
874+// Modified by Joachim B Haga, 2012
875+//
876 // First added: 2010-05-26
877-// Last changed: 2011-02-23
878+// Last changed: 2012-02-29
879
880 #ifndef __GENERIC_DOF_MAP_H
881 #define __GENERIC_DOF_MAP_H
882@@ -108,6 +110,13 @@
883 /// Return the set of dof indices
884 virtual boost::unordered_set<uint> dofs() const = 0;
885
886+ /// Return map from shared dofs to the processes (not including the current
887+ /// process) that share it.
888+ virtual const boost::unordered_map<uint, std::vector<uint> >& shared_dofs() const = 0;
889+
890+ /// Return set of all processes that share dofs with the current process.
891+ virtual const std::set<uint>& neighbours() const = 0;
892+
893 /// Re-number based on provided re-numbering map
894 virtual void renumber(const std::vector<uint>& renumbering_map) = 0;
895
896
897=== modified file 'dolfin/fem/SystemAssembler.cpp'
898--- dolfin/fem/SystemAssembler.cpp 2012-02-22 09:40:00 +0000
899+++ dolfin/fem/SystemAssembler.cpp 2012-02-29 14:31:44 +0000
900@@ -16,9 +16,10 @@
901 // along with DOLFIN. If not, see <http://www.gnu.org/licenses/>.
902 //
903 // Modified by Anders Logg 2008-2011
904+// Modified by Joachim B Haga 2012
905 //
906 // First added: 2009-06-22
907-// Last changed: 2011-11-14
908+// Last changed: 2012-02-29
909
910 #include <dolfin/log/dolfin_log.h>
911 #include <dolfin/common/Timer.h>
912@@ -168,20 +169,9 @@
913 DirichletBC::Map boundary_values;
914 for (uint i = 0; i < bcs.size(); ++i)
915 {
916- // Methods other than 'pointwise' are not robust in parallel since a vertex
917- // can have a bc applied, but the partition might not have a facet on the boundary.
918+ bcs[i]->get_boundary_values(boundary_values);
919 if (MPI::num_processes() > 1 && bcs[i]->method() != "pointwise")
920- {
921- if (MPI::process_number() == 0)
922- {
923- warning("Dirichlet boundary condition method '%s' is not robust in parallel with symmetric assembly.", bcs[i]->method().c_str());
924- //warning("Caution: 'on_boundary' does not work with 'pointwise' boundary conditions,");
925- }
926- bcs[i]->get_boundary_values(boundary_values);
927- //bcs[i]->get_boundary_values(boundary_values, "pointwise");
928- }
929- else
930- bcs[i]->get_boundary_values(boundary_values);
931+ bcs[i]->gather(boundary_values);
932 }
933
934 // Modify boundary values for incremental (typically nonlinear) problems
