[PVFS2-CVS] commit by robl in pvfs2-1/doc: pvfs2-faq.tex

CVS commit program cvs at parl.clemson.edu
Tue Mar 8 11:07:38 EST 2005

Update of /projects/cvsroot/pvfs2-1/doc
In directory parlweb:/tmp/cvs-serv28084

Modified Files:
	pvfs2-faq.tex
Log Message:
this will soon be an entire document, but in the meantime, make it a FAQ entry

Index: pvfs2-faq.tex
RCS file: /projects/cvsroot/pvfs2-1/doc/pvfs2-faq.tex,v
diff -u -w -p -u -r1.28 -r1.29
--- pvfs2-faq.tex	26 Jan 2005 21:42:12 -0000	1.28
+++ pvfs2-faq.tex	8 Mar 2005 16:07:37 -0000	1.29
@@ -601,6 +601,39 @@ If you're looking for a quick suggestion
 to use, we suggest ext3 with ``journal data writeback'' option as a
 reasonable choice.
+\subsection{My app still runs more slowly than I would like.  What can I do?}
+If you ask the mailing list for help with performance, someone will probably
+ask you one or more of the following questions:
+\begin{itemize}
+\item Are you running servers and clients on the same nodes?  We support this
+      configuration -- sometimes it is required given space or budget
+      constraints.  You will not, however, see the best performance out of this
+      configuration.  See Section~\ref{sec:howmany-servers}.
+\item Have you benchmarked your network?  A tool like netpipe or ttcp can help
+      diagnose point-to-point issues.  PVFS2 will tax your bisection bandwidth,
+      so if possible, run simultaneous instances of these network benchmarks
+      on multiple machine pairs and see if performance suffers.  One user
+      realized the cluster had a hub (not a switch, a hub) connecting all the
+      nodes.  Needless to say, performance was pretty bad.
+\item Have you examined buffer sizes?  On Linux, the settings in /proc can
+      make a big difference in TCP performance.  Try raising
+      \texttt{/proc/sys/net/core/rmem\_default} and
+      \texttt{/proc/sys/net/core/wmem\_default}.
+\end{itemize}
+
+Tuning applications can be quite a challenge.  You have disks, networks,
+operating systems, PVFS2, the application, and sometimes MPI.  We are
+working on a document to better guide the tuning of systems for
+I/O-intensive workloads.
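The point-to-point benchmarking advice in the diff above can be sketched without any extra tools. A minimal TCP throughput test in Python follows; it runs over loopback, and the port, chunk size, and transfer size are arbitrary assumptions for illustration, not anything PVFS2 or netpipe/ttcp prescribe. On a real cluster you would run the sender and receiver on different nodes.

```python
# Minimal point-to-point TCP throughput sketch, in the spirit of
# netpipe/ttcp.  All sizes and addresses are arbitrary assumptions.
import socket
import threading
import time

CHUNK = b"x" * 65536            # 64 KiB per send
TOTAL_BYTES = 32 * 1024 * 1024  # 32 MiB per run

def _sink(listener):
    # Accept one connection and drain everything sent to it.
    conn, _ = listener.accept()
    with conn:
        while conn.recv(65536):
            pass

def measure_throughput():
    # Returns observed throughput in MB/s over loopback.
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))   # ephemeral port
    listener.listen(1)
    t = threading.Thread(target=_sink, args=(listener,))
    t.start()
    start = time.monotonic()
    with socket.create_connection(listener.getsockname()) as c:
        for _ in range(TOTAL_BYTES // len(CHUNK)):
            c.sendall(CHUNK)
    # Closing the sender lets the sink's recv() return b"" and exit.
    t.join()
    listener.close()
    elapsed = time.monotonic() - start
    return TOTAL_BYTES / (1024 * 1024) / elapsed

if __name__ == "__main__":
    print("loopback throughput: %.1f MB/s" % measure_throughput())
```

Running this simultaneously on several machine pairs, as the FAQ entry suggests, gives a rough feel for whether bisection bandwidth is the bottleneck.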
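The buffer-size item above can likewise be scripted. The following read-only sketch reports the kernel's current socket buffer settings; the helper name is our own, and raising the values requires root (for example by writing to the same /proc files or via sysctl).

```python
# Read the kernel's default and maximum socket buffer sizes from /proc.
# Read-only sketch: changing these values requires root privileges.
def read_socket_buffer_settings():
    settings = {}
    for name in ("rmem_default", "wmem_default", "rmem_max", "wmem_max"):
        path = "/proc/sys/net/core/" + name
        try:
            with open(path) as f:
                settings[name] = int(f.read().strip())
        except OSError:
            settings[name] = None   # no Linux /proc filesystem here
    return settings

if __name__ == "__main__":
    for name, value in read_socket_buffer_settings().items():
        print(name, "=", value)
```

Comparing these values before and after tuning, and re-running the network benchmark each time, makes it easy to see whether larger buffers actually help.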
