Merge lp:~jobh/dolfin/parallel-dirichlet into lp:~fenics-core/dolfin/trunk

Proposed by Joachim Haga
Status: Merged
Merge reported by: Garth Wells
Merged at revision: not available
Proposed branch: lp:~jobh/dolfin/parallel-dirichlet
Merge into: lp:~fenics-core/dolfin/trunk
Diff against target: 934 lines (+352/-67)
10 files modified
dolfin/common/MPI.cpp (+30/-1)
dolfin/common/MPI.h (+114/-16)
dolfin/fem/DirichletBC.cpp (+92/-22)
dolfin/fem/DirichletBC.h (+25/-8)
dolfin/fem/DofMap.cpp (+22/-1)
dolfin/fem/DofMap.h (+23/-1)
dolfin/fem/DofMapBuilder.cpp (+27/-3)
dolfin/fem/DofMapBuilder.h (+5/-0)
dolfin/fem/GenericDofMap.h (+10/-1)
dolfin/fem/SystemAssembler.cpp (+4/-14)
To merge this branch: bzr merge lp:~jobh/dolfin/parallel-dirichlet
Reviewer        Review Type    Date Requested    Status
Garth Wells                                      Approve
Review via email: mp+95179@code.launchpad.net

Description of the change

This branch does the following to Dirichlet boundary conditions:

(a) Make DirichletBC pointwise search faster (by not doing unnecessary initialisation of facets, and only visiting vertices once).

(b) Make DirichletBC topological and geometric search safe in parallel for SystemAssembler (by adding a gather() method which exchanges boundary values with neighbour processes).

==

(a) is a simple change: initialisation of facets is simply delayed until they are required. The timings for Poisson on a 48^3 cube are:

                                                  before [s]   after [s]

Topological search, compiled boundary subdomain      20.3        19.9
Topological search, Python boundary subdomain        29.2        29.5
Pointwise search, compiled boundary subdomain        21.4        19.9
Pointwise search, Python boundary subdomain          46.4        20.5 (!)

The recorded times include mesh creation, 2x assemble, 2x DirichletBC create, and 2x DirichletBC apply. The times change very little, except for pointwise search with a Python subdomain, which improves dramatically. Pointwise search is now much faster than topological search when using Python subdomains.
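The "only visiting vertices once" part of (a) can be sketched serially: a boolean mask over the dof range skips repeated calls to the user's `inside()` callback for dofs shared between cells, mirroring the `already_visited` vector in the patch. The function and data below are illustrative stand-ins, not the DOLFIN API.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Count calls to the (expensive, user-defined) inside() predicate when
// iterating cell-by-cell over dofs. With the mask enabled, each dof is
// tested at most once, as with the already_visited vector in the patch.
int count_inside_calls(const std::vector<std::vector<unsigned> >& cell_dofs,
                       std::size_t num_dofs, bool use_mask)
{
  std::vector<bool> already_visited(num_dofs, false);
  int calls = 0;
  for (std::size_t c = 0; c < cell_dofs.size(); ++c)
    for (std::size_t i = 0; i < cell_dofs[c].size(); ++i)
    {
      const unsigned dof = cell_dofs[c][i];
      if (use_mask)
      {
        if (already_visited[dof])
          continue;
        already_visited[dof] = true;
      }
      ++calls; // stands in for the call to SubDomain::inside()
    }
  return calls;
}
```

Two adjacent cells sharing two dofs already cut 6 predicate calls down to 4; the saving grows with mesh connectivity, which is why the Python-subdomain case (one Python callback per call) improves most.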

(b) is more involved. It adds neighbour/dof-sharing information to DofMap, plus MPI functionality for exchanging data with neighbours: a new distribute() call, and a new send_recv, because the current one deadlocks when different numbers of send_recvs are posted on different processes.
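The contract of the new distribute() can be modelled without MPI: each rank receives, from every neighbour, exactly what that neighbour queued for it, and empty payloads (sent only so that the send/receive counts stay matched across processes) are dropped on the receiving side. A serial sketch of that result, with illustrative names:

```cpp
#include <cassert>
#include <map>
#include <vector>

// Serial model of the result of MPI::distribute(group, in, out): given every
// rank's outgoing map (dest rank -> payload), compute what one rank receives
// (source rank -> payload). Empty payloads keep the send/recv counts matched
// and are then discarded, as in the patch.
typedef std::map<unsigned, std::vector<double> > rank_to_values;

rank_to_values model_distribute(unsigned rank,
                                const std::map<unsigned, rank_to_values>& all_outgoing)
{
  rank_to_values received;
  for (std::map<unsigned, rank_to_values>::const_iterator src = all_outgoing.begin();
       src != all_outgoing.end(); ++src)
  {
    rank_to_values::const_iterator it = src->second.find(rank);
    if (it != src->second.end() && !it->second.empty())
      received[src->first] = it->second; // keep only non-empty payloads
  }
  return received;
}
```

In the real call the neighbour groups must be symmetric (each process's group contains exactly the processes that list it), otherwise some posted send_recvs are never matched and the exchange deadlocks.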

It passes the unit and system tests (except KrylovSolver, but that one fails for me in trunk too).

Revision history for this message
Garth Wells (garth-wells) wrote :

This crashes the auto-adaptive-poisson demo, which is probably due to a bug in the adaptive solver (see below). I suspect that the solver fails to properly initialise the mesh, and that the initialisation was previously done by chance in the bc code.

*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
***
*** https://answers.launchpad.net/dolfin
***
*** Remember to include the error message listed below and, if possible,
*** include a *minimal* running example to reproduce the error.
***
*** -------------------------------------------------------------------------
*** Error: Unable to initialize mapping of degrees of freedom.
*** Reason: Missing entities of dimension 1. Try calling mesh.init(1).
*** Where: This error was encountered inside DofMap.cpp.
*** -------------------------------------------------------------------------

review: Needs Fixing
Revision history for this message
Garth Wells (garth-wells) :
review: Approve

Preview Diff

=== modified file 'dolfin/common/MPI.cpp'
--- dolfin/common/MPI.cpp 2011-11-18 12:14:50 +0000
+++ dolfin/common/MPI.cpp 2012-02-29 14:31:44 +0000
@@ -19,9 +19,10 @@
 // Modified by Anders Logg 2007-2011
 // Modified by Ola Skavhaug 2008-2009
 // Modified by Niclas Jansson 2009
+// Modified by Joachim B Haga 2012
 //
 // First added: 2007-11-30
-// Last changed: 2011-08-25
+// Last changed: 2012-02-29
 
 #include <numeric>
 #include <dolfin/log/dolfin_log.h>
@@ -145,6 +146,23 @@
   return r + (index - r * (n + 1)) / n;
 }
 //-----------------------------------------------------------------------------
+dolfin::MPINonblocking::MPINonblocking()
+{}
+//-----------------------------------------------------------------------------
+dolfin::MPINonblocking::~MPINonblocking()
+{
+  wait_all();
+}
+//-----------------------------------------------------------------------------
+void dolfin::MPINonblocking::wait_all()
+{
+  if (!reqs.empty())
+  {
+    boost::mpi::wait_all(reqs.begin(), reqs.end());
+    reqs.clear();
+  }
+}
+//-----------------------------------------------------------------------------
 //-----------------------------------------------------------------------------
 #else
 //-----------------------------------------------------------------------------
@@ -217,4 +235,15 @@
   return 0;
 }
 //-----------------------------------------------------------------------------
+dolfin::MPINonblocking::MPINonblocking()
+{}
+//-----------------------------------------------------------------------------
+dolfin::MPINonblocking::~MPINonblocking()
+{}
+//-----------------------------------------------------------------------------
+void dolfin::MPINonblocking::wait_all()
+{
+  error_no_mpi("call MPINonblocking::wait_all");
+}
+
 #endif
 
=== modified file 'dolfin/common/MPI.h'
--- dolfin/common/MPI.h 2011-12-04 11:33:42 +0000
+++ dolfin/common/MPI.h 2012-02-29 14:31:44 +0000
@@ -18,9 +18,10 @@
 // Modified by Ola Skavhaug 2008-2009
 // Modified by Anders Logg 2008-2011
 // Modified by Niclas Jansson 2009
+// Modified by Joachim B Haga 2012
 //
 // First added: 2007-11-30
-// Last changed: 2011-09-28
+// Last changed: 2012-02-29
 
 #ifndef __MPI_DOLFIN_WRAPPER_H
 #define __MPI_DOLFIN_WRAPPER_H
@@ -31,6 +32,7 @@
 #include <dolfin/log/dolfin_log.h>
 
 #ifdef HAS_MPI
+#include <boost/serialization/utility.hpp>
 #include <boost/mpi.hpp>
 #include <mpi.h>
 #endif
@@ -58,6 +60,32 @@
   };
   #endif
 
+  /// This class provides stateful (single communicator) non-blocking MPI functionality.
+
+  class MPINonblocking
+  {
+  public:
+    /// Create instance with associated MPI_WORLD communicator
+    MPINonblocking();
+
+    /// Destroy instance (waits for outstanding requests)
+    ~MPINonblocking();
+
+    /// Non-blocking send and receive
+    template<typename T>
+    void send_recv(const T& send_value, uint dest,
+                   T& recv_value, uint source);
+
+    /// Wait for all requests to finish
+    void wait_all();
+
+#ifdef HAS_MPI
+  private:
+    MPICommunicator mpi_comm;
+    std::vector<boost::mpi::request> reqs;
+#endif
+  };
+
   /// This class provides utility functions for easy communcation with MPI.
 
   class MPI
@@ -100,13 +128,22 @@
       distribute(in_values, destinations, out_values, sources);
     }
 
+    /// Distribute local arrays on a group of processes (typically neighbours
+    /// from GenericDofMap::neighbours()). It is important that each process'
+    /// group includes exactly the processes that has it in their groups,
+    /// otherwise it will deadlock.
+    template<typename T>
+    static void distribute(const std::set<uint> group,
+                           const std::map<uint,T>& in_values_per_dest,
+                           std::map<uint,T>& out_values_per_src);
+
     // Broadcast value from broadcaster process to all processes
     template<typename T>
     static void broadcast(T& value, uint broadcaster=0)
     {
 #ifdef HAS_MPI
       MPICommunicator mpi_comm;
-      boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_duplicate);
+      boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);
       boost::mpi::broadcast(comm, value, broadcaster);
 #endif
     }
@@ -118,7 +155,7 @@
     {
 #ifdef HAS_MPI
       MPICommunicator mpi_comm;
-      boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_duplicate);
+      boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);
       boost::mpi::scatter(comm, in_values, out_value, sending_process);
 #else
       dolfin_assert(sending_process == 0);
@@ -134,7 +171,7 @@
     {
 #ifdef HAS_MPI
       MPICommunicator mpi_comm;
-      boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_duplicate);
+      boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);
       boost::mpi::gather(comm, in_value, out_values, receiving_process);
 #else
       out_values.clear();
@@ -148,7 +185,7 @@
     {
 #ifdef HAS_MPI
       MPICommunicator mpi_comm;
-      boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_duplicate);
+      boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);
       boost::mpi::all_gather(comm, in_value, out_values);
 #else
       out_values.clear();
@@ -191,7 +228,7 @@
     {
 #ifdef HAS_MPI
       MPICommunicator mpi_comm;
-      boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_duplicate);
+      boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);
       T out;
       boost::mpi::all_reduce(comm, value, out, op);
       return out;
@@ -207,20 +244,17 @@
     /// reduction op)
     static uint global_offset(uint range, bool exclusive);
 
-    /// Send-receive and data
+    /// Send-receive data. Note that if the number of posted send-receives may
+    /// differ between processes, another interface (such as MPINonblocking::send_recv)
+    /// must be used since duplicating the communicator requires participation
+    /// from all processes.
     template<typename T>
     static void send_recv(const T& send_value, uint dest,
                           T& recv_value, uint source)
     {
 #ifdef HAS_MPI
-      MPICommunicator mpi_comm;
-      boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_duplicate);
-
-      // Non-blocking send-receive
-      boost::mpi::request reqs[2];
-      reqs[0] = comm.isend(dest, 0, send_value);
-      reqs[1] = comm.irecv(source, 0, recv_value);
-      boost::mpi::wait_all(reqs, reqs + 2);
+      MPINonblocking mpi;
+      mpi.send_recv(send_value, dest, recv_value, source);
 #else
       dolfin_error("MPI.h",
                    "call MPI::send_recv",
@@ -246,6 +280,13 @@
 
   private:
 
+#ifndef HAS_MPI
+    static void error_no_mpi(const char *where)
+    {
+      dolfin_error("MPI.h", where, "DOLFIN has been configured without MPI support");
+    }
+#endif
+
 #ifdef HAS_MPI
     // Return MPI data type
     template<typename T> static MPI_Datatype mpi_type()
@@ -259,6 +300,7 @@
 
   };
 
+
 #ifdef HAS_MPI
   // Specialisations for MPI_Datatypes
   template<> inline MPI_Datatype MPI::mpi_type<double>() { return MPI_DOUBLE; }
@@ -337,7 +379,63 @@
                  "DOLFIN has been configured without MPI support");
     }
 #endif
-  //---------------------------------------------------------------------------
+
+  //-----------------------------------------------------------------------------
+  template<typename T>
+  void dolfin::MPI::distribute(const std::set<uint> group,
+                               const std::map<uint,T>& in_values_per_dest,
+                               std::map<uint,T>& out_values_per_src)
+  {
+#ifdef HAS_MPI
+    typedef typename std::map<uint,T>::const_iterator map_const_iterator;
+    typedef typename std::map<uint,T>::iterator map_iterator;
+    dolfin::MPINonblocking mpi;
+    const T no_data;
+
+    // Send and receive values to all processes in groups (non-blocking). If a
+    // given process is not found in in_values_per_dest, send empty data.
+
+    out_values_per_src.clear();
+    for (std::set<uint>::const_iterator dest = group.begin(); dest != group.end(); ++dest)
+    {
+      map_const_iterator values = in_values_per_dest.find(*dest);
+      if (values != in_values_per_dest.end())
+        mpi.send_recv(values->second, *dest, out_values_per_src[*dest], *dest);
+      else
+        mpi.send_recv(no_data, *dest, out_values_per_src[*dest], *dest);
+    }
+
+    // Wait for all MPI calls before modifying out_values_per_src
+
+    mpi.wait_all();
+
+    // Remove received no_data entries.
+
+    map_iterator it = out_values_per_src.begin();
+    while (it != out_values_per_src.end())
+    {
+      map_iterator tmp = it++;
+      if (tmp->second.empty())
+        out_values_per_src.erase(tmp); // map::erase only invalidates current iterator
+    }
+#else
+    error_no_mpi("call MPI::distribute");
+#endif
+  }
+
+  //-----------------------------------------------------------------------------
+  template<typename T>
+  void dolfin::MPINonblocking::send_recv(const T& send_value, uint dest,
+                                         T& recv_value, uint source)
+  {
+#ifdef HAS_MPI
+    boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);
+    reqs.push_back(comm.isend(dest, 0, send_value));
+    reqs.push_back(comm.irecv(source, 0, recv_value));
+#else
+    error_no_mpi("call MPINonblocking::send_recv");
+#endif
+  }
 
 }
 
 
=== modified file 'dolfin/fem/DirichletBC.cpp'
--- dolfin/fem/DirichletBC.cpp 2011-12-15 00:03:23 +0000
+++ dolfin/fem/DirichletBC.cpp 2012-02-29 14:31:44 +0000
@@ -18,13 +18,15 @@
 // Modified by Kristian Oelgaard, 2008
 // Modified by Martin Sandve Alnes, 2008
 // Modified by Johan Hake, 2009
+// Modified by Joachim B Haga, 2009
 //
 // First added: 2007-04-10
-// Last changed: 2011-09-19
+// Last changed: 2012-02-29
 
 #include <map>
 #include <utility>
 #include <boost/assign/list_of.hpp>
+#include <boost/serialization/utility.hpp>
 
 #include <dolfin/common/constants.h>
 #include <dolfin/common/Array.h>
@@ -67,7 +69,6 @@
 {
   check();
   parameters = default_parameters();
-  init_from_sub_domain(_user_sub_domain);
 }
 //-----------------------------------------------------------------------------
 DirichletBC::DirichletBC(boost::shared_ptr<const FunctionSpace> V,
@@ -80,7 +81,6 @@
 {
   check();
   parameters = default_parameters();
-  init_from_sub_domain(_user_sub_domain);
 }
 //-----------------------------------------------------------------------------
 DirichletBC::DirichletBC(const FunctionSpace& V, const GenericFunction& g,
@@ -89,11 +89,12 @@
   : BoundaryCondition(V),
     Hierarchical<DirichletBC>(*this),
     g(reference_to_no_delete_pointer(g)),
-    _method(method)
+    _method(method),
+    _user_mesh_function(reference_to_no_delete_pointer(sub_domains)),
+    _user_sub_domain_marker(sub_domain)
 {
   check();
   parameters = default_parameters();
-  init_from_mesh_function(sub_domains, sub_domain);
 }
 //-----------------------------------------------------------------------------
 DirichletBC::DirichletBC(boost::shared_ptr<const FunctionSpace> V,
@@ -103,22 +104,23 @@
                          std::string method)
   : BoundaryCondition(V),
     Hierarchical<DirichletBC>(*this),
-    g(g), _method(method)
+    g(g), _method(method),
+    _user_mesh_function(sub_domains),
+    _user_sub_domain_marker(sub_domain)
 {
   check();
   parameters = default_parameters();
-  init_from_mesh_function(*sub_domains, sub_domain);
 }
 //-----------------------------------------------------------------------------
 DirichletBC::DirichletBC(const FunctionSpace& V, const GenericFunction& g,
                          uint sub_domain, std::string method)
   : BoundaryCondition(V),
     Hierarchical<DirichletBC>(*this),
-    g(reference_to_no_delete_pointer(g)), _method(method)
+    g(reference_to_no_delete_pointer(g)), _method(method),
+    _user_sub_domain_marker(sub_domain)
 {
   check();
   parameters = default_parameters();
-  init_from_mesh(sub_domain);
 }
 //-----------------------------------------------------------------------------
 DirichletBC::DirichletBC(boost::shared_ptr<const FunctionSpace> V,
@@ -126,11 +128,11 @@
                          uint sub_domain, std::string method)
   : BoundaryCondition(V),
     Hierarchical<DirichletBC>(*this),
-    g(g), _method(method)
+    g(g), _method(method),
+    _user_sub_domain_marker(sub_domain)
 {
   check();
   parameters = default_parameters();
-  init_from_mesh(sub_domain);
 }
 //-----------------------------------------------------------------------------
 DirichletBC::DirichletBC(boost::shared_ptr<const FunctionSpace> V,
@@ -201,6 +203,43 @@
   apply(&A, &b, &x);
 }
 //-----------------------------------------------------------------------------
+void DirichletBC::gather(Map& boundary_values) const
+{
+  typedef std::vector<std::pair<uint, double> > bv_vec_type;
+  typedef std::map<uint, bv_vec_type> map_type;
+
+  typedef boost::unordered_map<uint, std::vector<uint> > shared_dof_type;
+  typedef shared_dof_type::const_iterator shared_dof_iterator;
+  typedef std::vector<uint>::const_iterator proc_iterator;
+
+  dolfin_assert(_function_space->dofmap());
+  const GenericDofMap& dofmap = *_function_space->dofmap();
+  const shared_dof_type& shared_dofs = dofmap.shared_dofs();
+
+  // Create list of boundary values to send to each processor
+
+  map_type proc_map;
+  for (Map::const_iterator bv = boundary_values.begin(); bv != boundary_values.end(); ++bv)
+  {
+    // If the boundary value is attached to a shared dof, add it to the list of
+    // boundary values for each of the processors that share it
+    shared_dof_iterator shared_dof = shared_dofs.find(bv->first);
+    if (shared_dof != shared_dofs.end())
+      for (proc_iterator proc = shared_dof->second.begin(); proc != shared_dof->second.end(); ++proc)
+        proc_map[*proc].push_back(*bv);
+  }
+
+  // Distribute the lists between neighbours
+
+  map_type received_bvs;
+  MPI::distribute(dofmap.neighbours(), proc_map, received_bvs);
+
+  // Add the received boundary values to the local boundary values
+
+  for (map_type::const_iterator it = received_bvs.begin(); it != received_bvs.end(); ++it)
+    boundary_values.insert(it->second.begin(), it->second.end());
+}
+//-----------------------------------------------------------------------------
 void DirichletBC::get_boundary_values(Map& boundary_values,
                                       std::string method) const
 {
@@ -564,7 +603,20 @@
   }
 }
 //-----------------------------------------------------------------------------
-void DirichletBC::init_from_sub_domain(boost::shared_ptr<const SubDomain> sub_domain)
+void DirichletBC::init_facets() const
+{
+  if (facets.size() > 0)
+    return;
+
+  if (_user_sub_domain)
+    init_from_sub_domain(_user_sub_domain);
+  else if (_user_mesh_function)
+    init_from_mesh_function(*_user_mesh_function, _user_sub_domain_marker);
+  else
+    init_from_mesh(_user_sub_domain_marker);
+}
+//-----------------------------------------------------------------------------
+void DirichletBC::init_from_sub_domain(boost::shared_ptr<const SubDomain> sub_domain) const
 {
   dolfin_assert(facets.size() == 0);
 
@@ -595,7 +647,7 @@
 }
 //-----------------------------------------------------------------------------
 void DirichletBC::init_from_mesh_function(const MeshFunction<uint>& sub_domains,
-                                          uint sub_domain)
+                                          uint sub_domain) const
 {
   dolfin_assert(facets.size() == 0);
 
@@ -624,7 +676,7 @@
   }
 }
 //-----------------------------------------------------------------------------
-void DirichletBC::init_from_mesh(uint sub_domain)
+void DirichletBC::init_from_mesh(uint sub_domain) const
 {
   dolfin_assert(facets.size() == 0);
 
@@ -678,6 +730,8 @@
   dolfin_assert(_function_space);
   dolfin_assert(g);
 
+  init_facets();
+
   // Special case
   if (facets.size() == 0)
   {
@@ -724,18 +778,19 @@
       const double value = data.w[data.facet_dofs[i]];
       boundary_values[global_dof] = value;
     }
-
     p++;
   }
 }
 //-----------------------------------------------------------------------------
 void DirichletBC::compute_bc_geometric(Map& boundary_values,
                                        BoundaryCondition::LocalData& data) const
 {
   dolfin_assert(_function_space);
   dolfin_assert(_function_space->element());
   dolfin_assert(g);
 
+  init_facets();
+
   // Special case
   if (facets.size() == 0)
   {
@@ -814,7 +869,11 @@
   dolfin_assert(_function_space);
   dolfin_assert(_function_space->element());
   dolfin_assert(g);
-  dolfin_assert(_user_sub_domain);
+
+  if (!_user_sub_domain)
+    dolfin_error("DirichletBC.cpp",
+                 "computing Dirichlet boundary values, pointwise search",
+                 "A SubDomain is required for pointwise search");
 
   // Get mesh and dofmap
   dolfin_assert(_function_space->mesh());
@@ -828,9 +887,13 @@
   // Create UFC cell object
   UFCCell ufc_cell(mesh);
 
+  // Speeder-upper
+  std::pair<uint,uint> local_range = dofmap.ownership_range();
+  std::vector<bool> already_visited(local_range.second - local_range.first);
+  std::fill(already_visited.begin(), already_visited.end(), false);
+
   // Iterate over cells
   Progress p("Computing Dirichlet boundary values, pointwise search", mesh.num_cells());
-  Array<double> x(gdim);
   for (CellIterator cell(mesh); !cell.end(); ++cell)
   {
     // Update UFC cell
@@ -848,9 +911,17 @@
     // Loop all dofs on cell
     for (uint i = 0; i < dofmap.cell_dimension(cell->index()); ++i)
     {
-      // Check if the coordinates are part of the sub domain (calls user-defined 'indside' function)
-      for (uint j = 0; j < gdim; ++j)
-        x[j] = data.coordinates[i][j];
+      const uint global_dof = cell_dofs[i];
+      if (global_dof >= local_range.first && global_dof < local_range.second)
+      {
+        const uint dof_index = global_dof - local_range.first;
+        if (already_visited[dof_index])
+          continue;
+        already_visited[dof_index] = true;
+      }
+
+      // Check if the coordinates are part of the sub domain (calls user-defined 'inside' function)
+      Array<double> x(gdim, &data.coordinates[i][0]);
       if (!_user_sub_domain->inside(x, false))
         continue;
 
@@ -863,7 +934,6 @@
       }
 
       // Set boundary value
-      const uint global_dof = cell_dofs[i];
       const double value = data.w[i];
       boundary_values[global_dof] = value;
     }
 
=== modified file 'dolfin/fem/DirichletBC.h'
--- dolfin/fem/DirichletBC.h 2011-10-26 08:59:01 +0000
+++ dolfin/fem/DirichletBC.h 2012-02-29 14:31:44 +0000
@@ -17,9 +17,10 @@
 //
 // Modified by Kristian Oelgaard, 2007
 // Modified by Johan Hake, 2009
+// Modified by Joachim B Haga, 2012
 //
 // First added: 2007-04-10
-// Last changed: 2011-04-13
+// Last changed: 2012-02-29
 //
 // FIXME: This class needs some cleanup, in particular collecting
 // FIXME: all data from different representations into a common
@@ -94,8 +95,6 @@
   /// three possibilties are "topological", "geometric" and
   /// "pointwise".
 
-  /// This class specifies the interface for setting (strong)
-
   class DirichletBC : public BoundaryCondition, public Hierarchical<DirichletBC>
   {
 
@@ -301,6 +300,16 @@
     void get_boundary_values(Map& boundary_values,
                              std::string method="default") const;
 
+
+    /// Get boundary values from neighbour processes. If a method other than
+    /// "pointwise" is used, this is necessary to ensure all boundary dofs are
+    /// marked on all processes.
+    ///
+    /// *Arguments*
+    ///     boundary_values (boost::unordered_map<uint, double>)
+    ///         Map from dof to boundary value.
+    void gather(Map& boundary_values) const;
+
     /// Make rows of matrix associated with boundary condition zero,
     /// useful for non-diagonal matrices in a block matrix.
     ///
@@ -401,15 +410,18 @@
     // Check input data to constructor
     void check() const;
 
+    // Initialize facets (from sub domain, mesh, etc)
+    void init_facets() const;
+
     // Initialize sub domain markers from sub domain
-    void init_from_sub_domain(boost::shared_ptr<const SubDomain> sub_domain);
+    void init_from_sub_domain(boost::shared_ptr<const SubDomain> sub_domain) const;
 
     // Initialize sub domain markers from MeshFunction
     void init_from_mesh_function(const MeshFunction<uint>& sub_domains,
-                                 uint sub_domain);
+                                 uint sub_domain) const;
 
     // Initialize sub domain markers from mesh
-    void init_from_mesh(uint sub_domain);
+    void init_from_mesh(uint sub_domain) const;
 
     // Compute dofs and values for application of boundary conditions using
     // given method
@@ -444,8 +456,13 @@
     boost::shared_ptr<const SubDomain> _user_sub_domain;
 
     // Boundary facets, stored as pairs (cell, local facet number)
-    std::vector<std::pair<uint, uint> > facets;
+    mutable std::vector<std::pair<uint, uint> > facets;
 
+    // User defined mesh function
+    boost::shared_ptr<const MeshFunction<uint> > _user_mesh_function;
+
+    // User defined sub domain marker for mesh or mesh function
+    uint _user_sub_domain_marker;
   };
 
 }
 
=== modified file 'dolfin/fem/DofMap.cpp'
--- dolfin/fem/DofMap.cpp 2011-11-18 12:14:50 +0000
+++ dolfin/fem/DofMap.cpp 2012-02-29 14:31:44 +0000
@@ -19,9 +19,10 @@
 // Modified by Kent-Andre Mardal, 2009
 // Modified by Ola Skavhaug, 2009
 // Modified by Niclas Jansson, 2009
+// Modified by Joachim B Haga, 2012
 //
 // First added: 2007-03-01
-// Last changed: 2011-10-31
+// Last changed: 2012-02-29
 
 #include <dolfin/common/MPI.h>
 #include <dolfin/common/NoDeleter.h>
@@ -173,6 +174,8 @@
   // Modify dofmap for non-UFC numbering
   ufc_map_to_dofmap.clear();
   _off_process_owner.clear();
+  _shared_dofs.clear();
+  _neighbours.clear();
   if (parent_dofmap.ufc_map_to_dofmap.size() > 0)
   {
     boost::unordered_map<uint, uint>::const_iterator ufc_to_current_dof;
@@ -196,6 +199,14 @@
       boost::unordered_map<uint, uint>::const_iterator parent_off_proc = parent_dofmap._off_process_owner.find(*dof);
       if (parent_off_proc != parent_dofmap._off_process_owner.end())
         _off_process_owner.insert(*parent_off_proc);
+
+      // Add to shared-dof process map, and update the set of neighbours
+      boost::unordered_map<uint, std::vector<uint> >::const_iterator parent_shared = parent_dofmap._shared_dofs.find(*dof);
+      if (parent_shared != parent_dofmap._shared_dofs.end())
+      {
+        _shared_dofs.insert(*parent_shared);
+        _neighbours.insert(parent_shared->second.begin(), parent_shared->second.end());
+      }
     }
   }
 }
@@ -309,6 +320,16 @@
   return _off_process_owner;
 }
 //-----------------------------------------------------------------------------
+const boost::unordered_map<uint, std::vector<uint> >& DofMap::shared_dofs() const
+{
+  return _shared_dofs;
+}
+//-----------------------------------------------------------------------------
+const std::set<uint>& DofMap::neighbours() const
+{
+  return _neighbours;
+}
+//-----------------------------------------------------------------------------
 void DofMap::tabulate_facet_dofs(uint* dofs, uint local_facet) const
 {
   dolfin_assert(_ufc_dofmap);
 
=== modified file 'dolfin/fem/DofMap.h'
--- dolfin/fem/DofMap.h 2011-11-18 12:14:50 +0000
+++ dolfin/fem/DofMap.h 2012-02-29 14:31:44 +0000
@@ -18,9 +18,10 @@
 // Modified by Martin Alnes, 2008
 // Modified by Kent-Andre Mardal, 2009
 // Modified by Ola Skavhaug, 2009
+// Modified by Joachim B Haga, 2012
 //
 // First added: 2007-03-01
-// Last changed: 2011-10-31
+// Last changed: 2012-02-29
 
 #ifndef __DOLFIN_DOF_MAP_H
 #define __DOLFIN_DOF_MAP_H
@@ -178,6 +179,21 @@
     ///     The map from non-local dofs.
     const boost::unordered_map<unsigned int, unsigned int>& off_process_owner() const;
 
+    /// Return map from all shared dofs to the processes (not including the current
+    /// process) that share it.
+    ///
+    /// *Returns*
+    ///     boost::unordered_map<uint, std::vector<uint> >
+    ///         The map from dofs to list of processes
+    const boost::unordered_map<uint, std::vector<uint> >& shared_dofs() const;
+
+    /// Return set of all neighbouring processes.
+    ///
+    /// *Returns*
+    ///     boost::unordered_set<uint>
+    ///         The set of processes
+    const std::set<uint>& neighbours() const;
+
     /// Local-to-global mapping of dofs on a cell
     ///
     /// *Arguments*
@@ -346,6 +362,12 @@
     // this process
     boost::unordered_map<uint, uint> _off_process_owner;
 
+    // List of processes that share a given dof
+    boost::unordered_map<uint, std::vector<uint> > _shared_dofs;
+
+    // Neighbours (processes that we share dofs with)
+    std::set<uint> _neighbours;
+
     // True iff sub-dofmap (a view, i.e. not collapsed)
     bool _is_view;
 
 
=== modified file 'dolfin/fem/DofMapBuilder.cpp'
--- dolfin/fem/DofMapBuilder.cpp 2011-11-18 12:14:50 +0000
+++ dolfin/fem/DofMapBuilder.cpp 2012-02-29 14:31:44 +0000
@@ -17,9 +17,10 @@
 //
 // Modified by Niclas Jansson 2009.
 // Modified by Garth N. Wells 2010.
+// Modified by Joachim B Haga, 2012.
 //
 // First added: 2008-08-12
-// Last changed: 2011-11-14
+// Last changed: 2012-02-29
 
 #include <ufc.h>
 #include <boost/random.hpp>
@@ -87,18 +88,20 @@
 {
   // Create data structures
   DofMapBuilder::set owned_dofs, shared_owned_dofs, shared_unowned_dofs;
+  DofMapBuilder::vec_map shared_dof_processes;
 
   // Computed owned and shared dofs (and owned and un-owned)
   compute_ownership(owned_dofs, shared_owned_dofs, shared_unowned_dofs,
-                    dofmap, global_dofs, mesh);
+                    shared_dof_processes, dofmap, global_dofs, mesh);
 
   // Renumber dofs owned dofs and received new numbering for unowned shared dofs
   parallel_renumber(owned_dofs, shared_owned_dofs, shared_unowned_dofs,
-                    dofmap, mesh);
+                    shared_dof_processes, dofmap, mesh);
 }
 //-----------------------------------------------------------------------------
 void DofMapBuilder::compute_ownership(set& owned_dofs, set& shared_owned_dofs,
                                       set& shared_unowned_dofs,
+                                      vec_map& shared_dof_processes,
                                       const DofMap& dofmap,
                                       const DofMapBuilder::set& global_dofs,
                                       const Mesh& mesh)
@@ -174,6 +177,7 @@
     {
       const uint received_dof = recv_buffer[i];
       const uint received_vote = recv_buffer[i + 1];
+
       if (shared_owned_dofs.find(received_dof) != shared_owned_dofs.end())
       {
         // Move dofs with higher ownership votes from shared to shared but not owned
@@ -191,6 +195,14 @@
                        "compute mapping of degrees of freedom",
                        "Cannot decide on dof ownership; votes are equal");
         }
+
+        // Remember the sharing of the dof
+        shared_dof_processes[received_dof].push_back(src);
+      }
+      else if (shared_unowned_dofs.find(received_dof) != shared_unowned_dofs.end())
+      {
+        // Remember the sharing of the dof
+        shared_dof_processes[received_dof].push_back(src);
       }
     }
   }
@@ -240,6 +252,7 @@
 void DofMapBuilder::parallel_renumber(const set& owned_dofs,
                                       const set& shared_owned_dofs,
                                       const set& shared_unowned_dofs,
+                                      const vec_map& shared_dof_processes,
                                       DofMap& dofmap, const Mesh& mesh)
 {
   log(TRACE, "Renumber dofs for parallel dof map");
@@ -317,6 +330,17 @@
     }
   }
 
+  // Insert the shared-dof-to-process mapping into the dofmap, renumbering as necessary
+  for (vec_map::const_iterator it = shared_dof_processes.begin(); it != shared_dof_processes.end(); ++it)
+  {
+    boost::unordered_map<uint, uint>::const_iterator new_index = old_to_new_dof_index.find(it->first);
+    if (new_index == old_to_new_dof_index.end())
+      dofmap._shared_dofs.insert(*it);
+    else
+      dofmap._shared_dofs.insert(std::make_pair(new_index->second, it->second));
+    dofmap._neighbours.insert(it->second.begin(), it->second.end());
+  }
+
   // Build new dof map
   for (CellIterator cell(mesh); !cell.end(); ++cell)
   {
 
=== modified file 'dolfin/fem/DofMapBuilder.h'
--- dolfin/fem/DofMapBuilder.h 2011-11-24 17:51:44 +0000
+++ dolfin/fem/DofMapBuilder.h 2012-02-29 14:31:44 +0000
@@ -25,6 +25,7 @@
 
 #include <set>
 #include <boost/unordered_set.hpp>
+#include <boost/unordered_map.hpp>
 #include <dolfin/common/Set.h>
 
 namespace ufc
@@ -55,6 +56,8 @@
 
     typedef std::vector<dolfin::uint>::const_iterator vector_it;
 
+    typedef boost::unordered_map<uint, std::vector<uint> > vec_map;
+
   public:
 
     static void build(DofMap& dofmap, const Mesh& dolfin_mesh,
@@ -70,6 +73,7 @@
 
     static void compute_ownership(set& owned_dofs, set& shared_owned_dofs,
                                   set& shared_unowned_dofs,
+                                  vec_map& shared_dof_processes,
                                   const DofMap& dofmap,
                                   const DofMapBuilder::set& global_dofs,
                                   const Mesh& mesh);
@@ -77,6 +81,7 @@
     static void parallel_renumber(const set& owned_dofs,
                                   const set& shared_owned_dofs,
                                   const set& shared_unowned_dofs,
+                                  const vec_map& shared_dof_processes,
                                   DofMap& dofmap, const Mesh& mesh);
 
     /// Compute set of global dofs (e.g. Reals associated with global
 
=== modified file 'dolfin/fem/GenericDofMap.h'
--- dolfin/fem/GenericDofMap.h 2011-09-27 07:14:59 +0000
+++ dolfin/fem/GenericDofMap.h 2012-02-29 14:31:44 +0000
@@ -15,8 +15,10 @@
 // You should have received a copy of the GNU Lesser General Public License
 // along with DOLFIN. If not, see <http://www.gnu.org/licenses/>.
 //
+// Modified by Joachim B Haga, 2012
+//
 // First added: 2010-05-26
-// Last changed: 2011-02-23
+// Last changed: 2012-02-29
 
 #ifndef __GENERIC_DOF_MAP_H
 #define __GENERIC_DOF_MAP_H
@@ -108,6 +110,13 @@
     /// Return the set of dof indices
     virtual boost::unordered_set<uint> dofs() const = 0;
 
+    /// Return map from shared dofs to the processes (not including the current
+    /// process) that share it.
+    virtual const boost::unordered_map<uint, std::vector<uint> >& shared_dofs() const = 0;
+
+    /// Return set of all processes that share dofs with the current process.
+    virtual const std::set<uint>& neighbours() const = 0;
+
     /// Re-number based on provided re-numbering map
     virtual void renumber(const std::vector<uint>& renumbering_map) = 0;
 
 
=== modified file 'dolfin/fem/SystemAssembler.cpp'
--- dolfin/fem/SystemAssembler.cpp 2012-02-22 09:40:00 +0000
+++ dolfin/fem/SystemAssembler.cpp 2012-02-29 14:31:44 +0000
@@ -16,9 +16,10 @@
 // along with DOLFIN. If not, see <http://www.gnu.org/licenses/>.
 //
 // Modified by Anders Logg 2008-2011
+// Modified by Joachim B Haga 2012
 //
 // First added: 2009-06-22
-// Last changed: 2011-11-14
+// Last changed: 2012-02-29
 
 #include <dolfin/log/dolfin_log.h>
 #include <dolfin/common/Timer.h>
@@ -168,20 +169,9 @@
   DirichletBC::Map boundary_values;
   for (uint i = 0; i < bcs.size(); ++i)
   {
-    // Methods other than 'pointwise' are not robust in parallel since a vertex
-    // can have a bc applied, but the partition might not have a facet on the boundary.
+    bcs[i]->get_boundary_values(boundary_values);
     if (MPI::num_processes() > 1 && bcs[i]->method() != "pointwise")
-    {
-      if (MPI::process_number() == 0)
-      {
-        warning("Dirichlet boundary condition method '%s' is not robust in parallel with symmetric assembly.", bcs[i]->method().c_str());
-        //warning("Caution: 'on_boundary' does not work with 'pointwise' boundary conditions,");
-      }
-      bcs[i]->get_boundary_values(boundary_values);
-      //bcs[i]->get_boundary_values(boundary_values, "pointwise");
-    }
-    else
-      bcs[i]->get_boundary_values(boundary_values);
+      bcs[i]->gather(boundary_values);
   }
 
   // Modify boundary values for incremental (typically nonlinear) problems
