[PVFS2-users] Succes installing ... now what/how?
james.eastman at fedex.com
Tue May 17 15:01:49 EDT 2005
You folks are great! It looks like my idea of using PVFS2 for this may
have been a misguided one. Thanks so much for your help.
Rob Ross wrote:
> Hi James,
> PVFS2 isn't primarily designed for HA. It's really designed to combine
> locally attached storage on multiple machines into a single, fast file
> system for use in HPC.
> To get high availability, you're going to need to combine PVFS2 with
> heartbeat or some similar package *and* have shared storage (which you
> do, at least in "goal one"). There's a document in the doc directory of
> the source distribution that describes setting this sort of thing up.
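For illustration, a heartbeat (v1-style) haresources entry for failing a PVFS2 server over between two nodes with shared storage might look like the sketch below. The node name, virtual IP, and init-script name are assumptions for the example, not details from this thread; see the HA document in the PVFS2 source for the authoritative setup.

```
# /etc/ha.d/haresources (heartbeat v1 syntax; all names hypothetical)
# oragrid1 is the preferred node, 192.168.1.200 is a virtual IP that
# clients use to reach the server, and pvfs2-server is an init script
# that starts pvfs2-server against the shared storage space.
oragrid1 192.168.1.200 pvfs2-server
```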
> Honestly, I doubt that either of these applications is going to run
> well on PVFS2. PVFS2 does poorly with small accesses because it does
> no client-side caching. I'm guessing that both the PHP app and SVN are
> going to behave poorly in this respect.
> We're happy to continue to answer questions in any case. If you're
> looking for an alternative, you might want to try GFS.
> James Eastman wrote:
>> Many thanks for your timely response. I have two goals in using
>> the pvfs2 file system. Goal one:
>> Take my simple PHP app and prove that one CAN drop an OLD F5 load
>> balancer in front of 5 scavenged machines that have HBA connectivity
>> to a Zzyzx RocketStor SAN and provide a fairly scalable and highly
>> available app NOT written in JAVA (my company/employer seems to be on
>> the JAVA-addict bandwagon of late).
>> AND goal 2 is:
>> To take 5 other scavenged machines and make a subversion code
>> repository that is, as well, highly available. The reason for the
>> SVN thought is that, again, my company/employer seems to think that
>> the only good/highly available code repositories/CM tools are the
>> ones that come from Borland (StarTeam). Yes, I support ALL of the
>> "but CVS is a better tool and SVN is better yet" thoughts.
>> As I mentioned, this is probably a REALLY simple thing in relation to
>> other, more advanced MPIO items. However, my thought is that if I
>> could make a proof then my headaches with the vendor noise could go
>> away. So, the simple answer to your question is ..... "I hope to
>> have '1 PVFS2 file system across 5 servers (useful)'. I then hope to
>> exercise this 1 PVFS2 file system in many new, and exciting ways :-)
>> . Again, thanks for your help.
>> James Eastman
>> "You know you've achieved perfection in design and development, not
>> when you have nothing more to add, but when you have nothing more to
>> take away." -- Antoine de Saint-Exupéry
>> Nathan Poznick wrote:
>>> Thus spake James Eastman:
>>>> I hope this post finds you doing well. I'm very new at this pvfs2
>>>> stuff and I have what is probably a simple question. If this
>>>> question has been answered already please feel free to point me to
>>>> the post(s) I should review and I promise to read/follow those
>>>> instructions. I am running a Gentoo Linux box with an Emulex lp9002
>>>> HBA. I have 4 other Gentoo boxes that I am planning on
>>>> installing/configuring EXACTLY the same as this first box. My goal
>>>> is to have IO and metadata servers running on each box with EXACT
>>>> file systems mounted and in use. So ..... for my first feat I
>>>> thought I'd mount up my /tmp area as a pvfs managed file system.
>>>> I chose /tmp because I wanted to put a simple PHP app on each of my
>>>> grid machines (oragrid1 - oragrid5, by the way) and see if, when one
>>>> of them wrote a session file to /tmp, the others would be able to
>>>> see and interact with said file. I also plan on making myself a
>>>> subversion grid .... and yet I digress. If I do a 'lsscsi' I see:
>>>> oragrid5 root # lsscsi
>>>> [0:0:0:0] disk Zzyzx VocSessionTemp 0281 /dev/sda
>>>> oragrid5 root #
>>>> So .... I fdisk /dev/sda (the device represented by my emulex card)
>>>> and I create a file system area that is the full size represented
>>>> by /dev/sda. I then decided to make the file system for the
>>>> /dev/sda device an xfs file system. So ..... I did a 'mkfs.xfs
>>>> /dev/sda' and my xfs file system was created. Now, I put an entry
>>>> in my /etc/fstab to make sure the file system would mount at boot
>>>> time and, as you might have expected, it did. When I do a df -m I see:
>>>> oragrid5 root # df -m
>>>> Filesystem 1M-blocks Used Available Use% Mounted on
>>>> /dev/ida/c0d0p3 7845 4534 2913 61% /
>>>> /dev/ida/c0d0p1 76 11 62 15% /boot
>>>> none 251 0 251 0% /dev/shm
>>>> /dev/sda4 1894 3 1891 1% /tmp
>>>> oragrid5 root #
>>>> Now, what entry do I put in my /etc/pvfs2tab file, and what command
>>>> should I type to mount my /dev/sda4 as a pvfs2-managed /tmp?
>>> Just a note, it's probably not a good idea to mount this on /tmp, since
>>> /tmp will be subject to various "auto-cleansing" processes on most
>>> systems (including wiping it on boot, which would be pretty bad if you
>>> reboot your nodes and lose your PVFS2 cluster). You may want to
>>> mount it on something like /pvfs2 and modify your php.ini to point
>>> PHP's session storage there instead.
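The php.ini adjustment mentioned above would be a one-line change along these lines; the /pvfs2/sessions path is an assumed example, not a path from this thread:

```
; php.ini -- store PHP session files on the shared mount instead of /tmp
; (/pvfs2/sessions is a hypothetical path; it must exist and be writable
; by the web server user)
session.save_path = "/pvfs2/sessions"
```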
>>> However, I think there may be some confusion here - there's a distinction
>>> between the PVFS2 storage space (which is an opaque area the server
>>> uses to store files and metadata) and the mounted location of a PVFS2
>>> filesystem on a client.
>>> Do you want to create 5 independent PVFS2 filesystems each only
>>> consisting of a single server (not very useful), or do you want to
>>> create 1 PVFS2 filesystem across 5 servers (useful)?
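For the "1 filesystem across 5 servers" case, a pvfs2tab entry names one of the PVFS2 servers and the filesystem rather than a local block device (the block device, e.g. /dev/sda4, would instead hold the server's storage space as configured in the server config file). A sketch, in which the hostname, port, filesystem name, and mount point are illustrative assumptions:

```
# /etc/pvfs2tab -- fstab-like entry for mounting a PVFS2 filesystem
# (oragrid1, port 3334, fs name pvfs2-fs, and /mnt/pvfs2 are examples)
tcp://oragrid1:3334/pvfs2-fs /mnt/pvfs2 pvfs2 defaults,noauto 0 0
```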
>>> PVFS2-users mailing list
>>> PVFS2-users at beowulf-underground.org