[PVFS2-users] mmap() support?

Rob Ross rross at mcs.anl.gov
Fri Oct 21 19:17:12 EDT 2005


mcuma wrote:
> 
> Here's the last part of the strace of less before it gets stuck.  Hope 
> you can make something out of this.
> ----------
> ......
> open("/usr/lib/locale/en_US/LC_NUMERIC", O_RDONLY) = 3
> fstat(3, {st_mode=S_IFREG|0644, st_size=59, ...}) = 0
> mmap(NULL, 59, PROT_READ, MAP_PRIVATE, 3, 0) = 0x2aaaaaafc000
> close(3)                                = 0
> open("/dev/tty", O_RDONLY)              = 3
> ioctl(3, SNDCTL_TMR_TIMEBASE, {B38400 opost isig icanon echo ...}) = 0
> fsync(3)                                = -1 EINVAL (Invalid argument)
> ioctl(3, SNDCTL_TMR_STOP, {B38400 opost isig -icanon -echo ...}) = 0
> rt_sigaction(SIGINT, {0x4129c0, [INT], SA_RESTART|0x4000000}, {SIG_DFL}, 8) = 0
> rt_sigaction(SIGTSTP, {0x412a00, [TSTP], SA_RESTART|0x4000000}, {SIG_DFL}, 8) = 0
> rt_sigaction(SIGWINCH, {0x412a40, [WINCH], SA_RESTART|0x4000000}, {SIG_DFL}, 8) = 0
> pipe([4, 5])                            = 0
> vfork()                                 = 21100
> close(5)                                = 0
> fstat(4, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2aaaaaafd000
> read(4,
> ---------------------

If you run "less" in a directory that is not a PVFS2 directory (i.e., on 
a local file system), does it behave the same way?  From this trace it 
doesn't look like it had even opened a PVFS2 file yet.
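
Something like the following quick test might also help isolate whether 
mmap on a PVFS2 file is the problem (this is just a sketch I'm 
suggesting, not anything from the PVFS2 tree): it maps the named file 
and faults in the first page, so you can compare behavior on a PVFS2 
file against a local one.

----------
/* mmap read test: map the named file and fault in the first page. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    int fd;
    char *p;
    struct stat st;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0 || fstat(fd, &st) < 0) {
        perror(argv[1]);
        return 1;
    }
    if (st.st_size == 0) {
        fprintf(stderr, "%s is empty; nothing to map\n", argv[1]);
        return 1;
    }
    p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* Touching p[0] forces a page-in through the file system. */
    printf("first byte: 0x%02x\n", (unsigned char)p[0]);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
----------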

> I also just noticed that copying TO PVFS2 is reasonably fast, but 
> copying FROM is slow. When I strace that, I get:

There is a known problem with GNU cp.  Basically, cp asks the file 
system receiving the data what block size it would like to see (the 
st_blksize value that fstat() returns) and uses that size for its 
transfers.  So if you are copying *to* PVFS2, PVFS2 says "work with 
these big blocks" and transfers occur quickly.  If you are copying 
*from* PVFS2, the local file system says "ah, 4K accesses are fine," 
and performance is slow.
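
To make that concrete, the logic is roughly this (a minimal sketch of 
the behavior described above, not cp's actual source): size the copy 
buffer from the st_blksize that fstat() reports for the *destination* 
file.

----------
/* Sketch: copy src to dst using the destination's preferred I/O size. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    int in, out;
    char *buf;
    size_t bufsize;
    ssize_t n;
    struct stat st;

    if (argc != 3) {
        fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
        return 1;
    }
    in = open(argv[1], O_RDONLY);
    out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0 || fstat(out, &st) < 0) {
        perror("setup");
        return 1;
    }

    /* The key line: the *destination* file system picks the transfer
     * size.  A local file system typically says 4096 here, while PVFS2
     * reports a much larger value -- which is why cp to PVFS2 is fast
     * and cp from PVFS2 onto a 4K-block local disk is slow. */
    bufsize = st.st_blksize;
    buf = malloc(bufsize);
    if (!buf) {
        perror("malloc");
        return 1;
    }
    while ((n = read(in, buf, bufsize)) > 0) {
        if (write(out, buf, n) != n) {
            perror("write");
            return 1;
        }
    }
    free(buf);
    close(in);
    close(out);
    return n < 0 ? 1 : 0;
}
----------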

Neill sent a patch to the cp maintainer quite a while back, but I don't 
think they decided to apply it.

However, that doesn't look like what's happening here.  It looks like 
some new bug, possibly related to our attempts to fix the zero-fill 
bug(s).  We'll check into this some more.

Thanks again!

Rob

> -----------
> strace cp /scratch/parallel/mcuma/out .
> execve("/bin/cp", ["cp", "/scratch/parallel/mcuma/out", "."], [/* 76 vars */]) = 0
> uname({sys="Linux", node="delicatearch2", ...}) = 0
> brk(0)                                  = 0x50f000
> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2aaaaaac0000
> ...
> lots of dynamic library loads, locales, etc.
> ...
> fstat(3, {st_mode=S_IFREG|0644, st_size=178468, ...}) = 0
> mmap(NULL, 178468, PROT_READ, MAP_PRIVATE, 3, 0) = 0x2aaaaaae4000
> close(3)                                = 0
> geteuid()                               = 171310
> lstat(".", {st_mode=S_IFDIR|0751, st_size=8192, ...}) = 0
> stat(".", {st_mode=S_IFDIR|0751, st_size=8192, ...}) = 0
> stat("/scratch/parallel/mcuma/out", {st_mode=S_IFREG|0644, st_size=78316, ...}) = 0
> stat("./out", {st_mode=S_IFREG|0644, st_size=65536, ...}) = 0
> open("/scratch/parallel/mcuma/out", O_RDONLY) = 3
> fstat(3, {st_mode=S_IFREG|0644, st_size=78316, ...}) = 0
> open("./crap", O_WRONLY|O_TRUNC)        = 4
> fstat(4, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
> fstat(3, {st_mode=S_IFREG|0644, st_size=78316, ...}) = 0
> read(3, "execve(\"/usr/bin/vi\", [\"vi\", \"te"..., 32768) = 32768
> write(4, "execve(\"/usr/bin/vi\", [\"vi\", \"te"..., 32768) = 32768
> read(3, "68) = 1228\nstat(\"/uufs/chpc.utah"..., 32768) = 32768
> write(4, "68) = 1228\nstat(\"/uufs/chpc.utah"..., 32768) = 32768
> read(3,
> ...
> then a long wait
> .....
>         0x7fffffff5f60, 32768)          = -1 EINTR (Interrupted system call)
> read(3,
> .....
> and so on....
> ----------
> 
> I am wondering if something in my environment is messing this up.  But 
> I only load the environment for the compilers (PGI, Pathscale, Intel), 
> and all of this was fine with 1.2.0.  Maybe we messed something up 
> during the install?  It all seemed to go pretty smoothly, though.
> 
> Thanks,
> MC
> 

