[PVFS2-users] NFS and PVFS2

Dean Hildebrand dhildebz at eecs.umich.edu
Thu Oct 7 13:08:27 EDT 2004


Thanks guys, this is really helpful.  I'll talk to the NFS server guys
here and see what the possibilities are.  Does lowering the block size of
PVFS2 help at all, or are there other issues going on?  To be clear, I mean
helping NFS over PVFS2, not the performance of PVFS2 itself.
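
(For reference, the only NFS-side knob I've been playing with so far is
the mount-time transfer size -- e.g., reusing the paths from Neill's df
output below, with the sizes just picked as an example:

    mount -o rsize=32768,wsize=32768 localhost:/tmp/mnt /tmp/nfs

-- but from Rob's summary it sounds like that alone doesn't change what
the NFSv4 server hands to the pvfs2-client.)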

Neill, could you clarify the following statement?  How could NFS over
PVFS2 do better than standalone PVFS2?
--
That said, iozone benchmarks running on NFS over PVFS2 still knock the
socks off standalone PVFS2, as it must be very smart about what needs to
actually be written to disk.  (In both cases, a number of ~4K blocks or
smaller are written through the pvfs2-client-core).
--

On Thu, 7 Oct 2004, Rob Ross wrote:

> So to summarize, the rsize/wsize values are not affecting the size of
> transfers between the NFSv4 server and the pvfs2-client, which are always
> small.  The best solution would be to find a way to get the NFSv4 server
> to use larger blocks when accessing PVFS2.
>
> Regards,
>
> Rob
>
> On Thu, 7 Oct 2004, Neill Miller wrote:
>
> > On Wed, 6 Oct 2004, Rob Ross wrote:
> >
> > > It would be helpful to get a log of the interactions between the NFSv4
> > > server and the pvfs2-client-core; are those 32K operations or 4K
> > > operations, for example?  Also, when you say "server", you're saying that
> > > the PVFS2 server and NFSv4 server are on the same machine, right?  It's a
> > > single-processor box?
> >
> > I've been looking at what's going on here and it looks like the worst case
> > is true.  For reference, I have not changed the block size of NFS, so if
> > 4K is the default, that's what I'm using for this.
> >
> > ====
> > maceva pvfs2 # df
> > Filesystem           1K-blocks      Used Available Use% Mounted on
> > <snip>
> > tcp://localhost:3334/pvfs2-fs
> >                       14635008  13836288    798720  95% /tmp/mnt
> > localhost:/tmp/mnt    14635008  13836288    798720  95% /tmp/nfs
> >
> >
> > [ FIRST: a copy from /dev/zero to the mounted pvfs2 volume ]
> > maceva linux-2.6 # time dd if=/dev/zero of=/tmp/mnt/pvfs2-file bs=4MB count=1
> > 1+0 records in
> > 1+0 records out
> > 4000000 bytes transferred in 0.622527 seconds (6425423 bytes/sec)
> >
> > real    0m0.676s
> > user    0m0.002s
> > sys     0m0.019s
> >
> > [ SECOND: a copy from /dev/zero to the mounted nfs volume over pvfs2 ]
> > maceva linux-2.6 # time dd if=/dev/zero of=/tmp/nfs/nfs-file bs=4MB count=1
> > 1+0 records in
> > 1+0 records out
> > 4000000 bytes transferred in 15.722940 seconds (254405 bytes/sec)
> >
> > real    0m15.820s
> > user    0m0.002s
> > sys     0m0.031s
> > ====
> >
> > I've verified that, at least for my NFS over PVFS2 configuration (i.e.
> > nothing fancy), each pvfs2-client-core I/O request is in fact 4K or
> > smaller (there seem to be a number of 124-byte operations that complete a
> > previous 3972-byte operation -- together exactly 4096 bytes -- which of
> > course is expensive in the pvfs2 world).
> >
> > Dean, I suspect that even if you're using 32K block sizes, any kind of
> > bulk I/O is going to be much slower over NFS/PVFS2 than it is going
> > directly to PVFS2.
> >
> > That said, iozone benchmarks running on NFS over PVFS2 still knock the
> > socks off standalone PVFS2, as it must be very smart about what needs to
> > actually be written to disk.  (In both cases, a number of ~4K blocks or
> > smaller are written through the pvfs2-client-core).
> >
> > Does this information help any?
> >
> > -Neill.
> >
> >
> _______________________________________________
> PVFS2-users mailing list
> PVFS2-users at beowulf-underground.org
> http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
>
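
Neill, one more thought: to get the log of the NFSv4 server /
pvfs2-client-core interactions that Rob asked about, I'll probably just
attach strace to the client daemon on my setup while a dd runs through the
NFS mount, roughly like this (the process name and paths are from my
install and may differ on yours, and I'm not yet sure how closely the
sizes on those reads/writes track the actual NFS request sizes):

    strace -f -T -e trace=read,write -p $(pidof pvfs2-client-core) 2> core.log &
    dd if=/dev/zero of=/tmp/nfs/nfs-file bs=4MB count=1

If that shows the same ~4K-or-smaller pattern, it would at least confirm
this isn't something specific to your configuration.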

