Sunday, October 31, 2010

Solve crimes in your area

Get Started





======================================================================================================================

Things are moving forward for our first ever users conference, which will be held at the Le Meridien Parkhotel in Frankfurt on 14 March. I'm getting very excited about this, and it looks like we'll have at least 6 OGP members in attendance.

The morning will consist of a short training course with a subset of our "A Day in the Life" training. This will cover a number of issues faced by systems and network administrators and how to use OpenNMS to solve them. The afternoon will consist of two tracks of one-hour presentations on various aspects of OpenNMS, including using maps, syslog integration, reporting, using OpenNMS with Asterisk, etc.

The cost will be 220€, with an early-bird special of 199€ until 22 February. We've also reserved a block of rooms at the hotel, which are first come, first served. Registration information can be found here: http://www.nethinks.com/info/veranstaltungen/cal/event/20090314//list-75/tx_cal_phpicalendar//opennms_conference_europe_2009/ and, as always, feel free to contact me with questions or comments.

-T

Tarus Balog, OpenNMS Maintainer      Main: +1 919 533 0160
The OpenNMS Group, Inc.              Fax: +1 503 961 7746
Email: tar ... @opennms.org          URL: http://www.opennms.org
PGP Key Fingerprint: 8945 8521 9771 FEC9 5481 512B FECA 11D2 FD82 B45C

Saturday, October 30, 2010

Take care of your future

Inspire a generation





==========================================================================================================================

[Geoff Witten wrote] If there is no, or little, evidence linking Homo to African apes, why do I have trouble picking Australopithecus from female Pan, and both of these from Homo habilis? Pongo, by contrast, is immediately and obviously different because of those big arched orbits.

[my reply] Not a cogent argument! Here is something of a counterexample: one might well have trouble picking a small, primitive marsupial from a rat. A kangaroo, by contrast, is immediately and obviously different! So, Pongo might have one or more striking autapomorphies which make it easily recognisable, and Homo more difficult to distinguish from Pan, yet Homo could be more closely related to Pongo. The resemblance between Homo and Pan could just be plesiomorphic.

Stephen

PS: This doesn't mean I support the "orangutan = Grehan theory"! :)

________________________________________
From: taxa ... @mailman.nhm.ku.edu [ taxa ... @mailman.nhm.ku.edu ] On Behalf Of John Grehan [ jgre ... @sciencebuff.org ]
Sent: Friday, 4 September 2009 10:58 p.m.
To: taxa ... @mailman.nhm.ku.edu
Subject: Re: [Taxacom] molecular update

There is no selection of characters other than restricting features to those that are either unique within the group being analyzed (species of large-bodied hominoids) or sufficiently rare in the outgroup (in this case all lesser apes and all Old World monkeys) to be considered derived for the in-group. If one relies on any morphological similarity to connect taxa then yes, one might get a different result due to the influence of primitive retentions.

I don't know why Geoff has trouble picking Australopithecus from female Pan, because he did not say why. I would have no trouble, because the teeth are different (e.g. Australopithecus has thick enameled molars, posteriorly thickened palate, vertical, flattened zygoma with anteriorly oriented roots, not to mention the lack of a vertically raised supra-orbital torus across the glabella (between the eyes)). The shape of the orbits of orangutans is not like those of gibbons, in that the orangutan orbits are vertically oval. This is a unique feature among living taxa, and along with a narrow inter-orbital space, is shared with the fossil Sivapithecus. Interestingly, some australopiths also have vertically oval or ovoid orbits (chimps do not), including that hobbit fossil.

Our studies have demonstrated that the cladistic morphological studies that seemed to back up the molecular data have many erroneous characters, especially for the chimpanzee relationship, and in one major study the genus Homo was not even included. For living taxa, support for the chimpanzee relationship was limited to no more than 10 features, of which we could only corroborate two. In addition, there has been agreement by one of the chimpanzee supporters on some of these errors (others, no comment yet). But if one holds that the molecular evidence is necessarily the truth, then none of these morphological issues matter, since they are all, by default, independently uninformative. As independently uninformative (on phylogenetic relationships), morphology loses its predictive ability and therefore becomes phylogenetically meaningless, along with the entire fossil record. This is the elephant in the molecular room.

John Grehan

-----Original Message-----
From: Geoff Witten [mailto: geof ... @rmit.edu.au ]
Sent: Thursday, September 03, 2009 11:53 PM
To: Stephen Thorpe; taxa ... @mailman.nhm.ku.edu ; John Grehan
Subject: Re: [Taxacom] molecular update

If there is no, or little, evidence linking Homo to African apes, why do I have trouble picking Australopithecus from female Pan, and both of these from Homo habilis? Pongo, by contrast, is immediately and obviously different because of those big arched orbits. Like big gibbons. Perhaps they are even more closely related to each other (Pongo and Hylobates) than to the African Hominidae, which in my mind should include Pan, Gorilla and Homo. Pongo and Homo are only close morphologically if you carefully select the morphological characters. Just thought someone should toss in the fact that there is abundant morphological evidence to back up the molecular if you select different morphological criteria.

Geoff

Geoff Witten
Senior Lecturer in Anatomy
Ph (03) 9925 7589 Fax 9467 8589

Stephen Thorpe < s.th ... @auckland.ac.nz > 4/09/09 12:31 >>>

> there is no evidence at all because morphology gives the 'wrong' answer

No, no, no! That is not how evidence works - have you ever been on a jury (or in the dock!)? Evidence that is 99% reliable can still give you the wrong answer (that is why it isn't 100% reliable!), but it is still 99% reliable evidence, not "no evidence at all" ...

Stephen

________________________________________
From: taxa ... @mailman.nhm.ku.edu [ taxa ... @mailman.nhm.ku.edu ] On Behalf Of John Grehan [ jgre ... @sciencebuff.org ]
Sent: Friday, 4 September 2009 2:21 p.m.
To: taxa ... @mailman.nhm.ku.edu
Subject: Re: [Taxacom] molecular update

Stephen, You have the argument correct. Your theorized response makes the point - that with the molecular theory there is no phylogenetic integration of the fossil and living taxa for human origins. And it's not a matter of just 'no reliable evidence'; there is no evidence at all, because morphology gives the 'wrong' answer.

John Grehan

-----Original Message-----
From: Stephen Thorpe [mailto: s.th ... @auckland.ac.nz ]
Sent: Thursday, September 03, 2009 10:04 PM
To: John Grehan; Taxacom
Subject: RE: [Taxacom] molecular update

John, If I understand you correctly, your argument is this:
(1) Morphology supports a relationship between living humans and orangutans (probably in some people's cases more than others! :)
(2) Molecular data contradict the human-orangutan relationship
(3) The only evidence for relationships between living humans and fossil ancestors is morphological
Therefore, if (2) wins over (1), then there is no reliable evidence for relationships between living humans and fossil ancestors.

Well, what are the possible responses? I think a "molecular person" could just stand firm and say that the evidence for establishing relationships involving fossil taxa is just not as good as for establishing relationships between extant taxa, so what? That was kind of obvious anyway, because fossils have fewer informative MORPHOLOGICAL characters than extant taxa ...

Stephen

________________________________________
From: taxa ... @mailman.nhm.ku.edu [ taxa ... @mailman.nhm.ku.edu ] On Behalf Of John Grehan [ jgre ... @sciencebuff.org ]
Sent: Friday, 4 September 2009 1:45 p.m.
To: Taxacom
Subject: Re: [Taxacom] molecular update

Here's something to think about that molecular systematists are going to have to figure out if they argue that the orangutan evidence is wrong because it conflicts with morphology. The morphological relationship with orangutans applies not only to humans, but also to fossil hominids (australopiths). If this evidence is invalidated by the molecular theory, then evolutionary theory is left without any phylogenetic connection between the fossil and living representatives of the human lineage. If the orangutan similarities of humans and hominids are false, then there is no empirical basis for accepting the reality of human similarities in fossil hominids either. So far the molecular theorists have sidestepped this problem. What a mess.

John Grehan

-----Original Message-----
From: taxa ... @mailman.nhm.ku.edu [mailto: taxa ... @mailman.nhm.ku.edu ] On Behalf Of Jason Mate
Sent: Thursday, September 03, 2009 3:28 PM
To: Taxacom
Subject: Re: [Taxacom] molecular update

Maybe it will encourage one of the molecular supporters on this list to attempt to publish the knockout. If we were boxing I'd give it a go; alas, it is by argumentation that we must feud, and so I have to wait for more substantial emails to come. Maybe if you supplied the papers in question....

Jason

_______________________________________________
Taxacom Mailing List
Taxa ... @mailman.nhm.ku.edu
http://mailman.nhm.ku.edu/mailman/listinfo/taxacom
The Taxacom archive going back to 1992 may be searched with either of these methods: (1) http://taxacom.markmail.org Or (2) a Google search specified as: site:mailman.nhm.ku.edu/pipermail/taxacom your search terms here

Friday, October 29, 2010

arbcombo -- Air Resources Board Public Hearing for Mandatory Reporting of Greenhouse Gas Emissions, December 16, 2010

The Air Resources Board will conduct a public hearing to consider
the Amendments to the Regulation for Mandatory Reporting of
Greenhouse Gas Emissions.

This notice, the ISOR and all subsequent regulatory documents,
including the FSOR, when completed, are available on ARB's
website for this rulemaking at:

http://www.arb.ca.gov/regact/2010/ghg2010/ghg2010.htm

Inquiries concerning the substance of the proposed regulation may
be directed to the designated agency contact persons, Mr. Doug
Thompson, Manager of ARB Climate Change Reporting Section,
Planning and Technical Support Division at (916) 322-7062, or Mr.
Patrick Gaffney, Staff Air Pollution Specialist, at (916)
322-7303.


SUBMITTAL OF COMMENTS

Interested members of the public may also present comments orally
or in writing at the meeting, and comments may be submitted by
postal mail or by electronic submittal before the meeting. The
public comment period for this regulatory action will begin on
November 1, 2010. To be considered by the Board, written
comments, not physically submitted at the meeting, must be
submitted on or after November 1, 2010 and received no later than
12:00 noon on December 15, 2010, and must be addressed to the
following:

Postal mail: Clerk of the Board, Air Resources Board
1001 I Street, Sacramento, California 95814

Electronic submittal:
http://www.arb.ca.gov/lispub/comm/bclist.php


Thank you


You are receiving this single arbcombo email because you are a
subscriber to or have made a public comment to one or more of the
following lists: agriculture-sp, board, capandtrade, cc, cement,
chps, forestry, fuels, gas-trans, ghg-rep, ghgverifiers, ghg-ver,
glass, hydprod, landfills, manuremgmt, oil-gas, refineries, res,
sf6elec.


arbcombo -- Air Resources Board Public Hearing for California Cap and Trade Regulation, December 16, 2010

The Air Resources Board will conduct a public hearing to consider
the adoption of a Proposed California Cap on Greenhouse Gas
Emissions and Market-Based Compliance Mechanisms Regulation,
Including Compliance Offset Protocols.

This notice and the associated regulatory materials can be
accessed from ARB's website at:

http://www.arb.ca.gov/regact/2010/capandtrade10/capandtrade10.htm

SUBMITTAL OF COMMENTS

Interested members of the public may also present comments orally
or in writing at the meeting, and comments may be submitted by
postal mail or by electronic submittal before the meeting. The
public comment period for this regulatory action will begin on
November 1, 2010. To be considered by the Board, written
comments, not physically submitted at the meeting, must be
submitted on or after November 1, 2010 and received no later than
12:00 noon on December 15, 2010, and must be addressed to the
following:

Postal mail: Clerk of the Board, Air Resources Board
1001 I Street, Sacramento, California 95814

Electronic submittal:
http://www.arb.ca.gov/lispub/comm/bclist.php

Inquiries concerning the substance of the proposed regulation may
be directed to Mr. Steve Cliff, Manager of the Program Evaluation
Branch, at (916) 322-7194 or Ms. Brieanne Aguila, Air Pollution
Specialist at (916) 324-0919.

Thank you

You are receiving this single arbcombo email because you are a
subscriber to or have made a public comment to one or more of the
following lists: board, capandtrade, cc.


arbcombo -- Air Resources Board has Posted the Following two Notices for the On-Road Regulation and Off-Road Regulation, December 16, 2010

Public Hearing to Consider the Adoption of Proposed Amendments to
the On-Road and Off-Road Regulations.

The Air Resources Board will conduct a public hearing to consider
the Adoption of Proposed Amendments to the Regulation to Reduce
Emissions of Diesel Particulate Matter, Oxides of Nitrogen and
Other Criteria Pollutants from In-Use On-Road Diesel-Fueled
Vehicles, the Heavy-Duty Vehicle Greenhouse Gas Emission
Reduction Measure, and the Regulation to Control Emissions from
In-Use On-Road Diesel-Fueled Heavy-Duty Drayage Trucks at Ports
and Intermodal Rail Yard Facilities

The notice, ISOR, and all subsequent regulatory documents,
including the FSOR, when completed, are available on ARB's
website for this rulemaking at:

http://www.arb.ca.gov/regact/2010/truckbus10/truckbus10.htm


The Air Resources Board will conduct a public hearing to consider
the Proposed Amendments to the Regulations for In-Use Off-Road
Diesel-Fueled Fleets and Off-Road Large Spark Ignition Engine
Fleet Requirements

The notice, ISOR, and all subsequent regulatory documents,
including the FSOR, when completed, are available on ARB's
website for this rulemaking at:

http://www.arb.ca.gov/regact/2010/offroadlsi10/offroadlsi10.htm

SUBMITTAL OF COMMENTS

Interested members of the public may also present comments orally
or in writing at the meeting, and comments may be submitted by
postal mail or by electronic submittal before the meeting. The
public comment period for this regulatory action will begin on
November 1, 2010. To be considered by the Board, written
comments, not physically submitted at the meeting, must be
submitted on or after November 1, 2010 and received no later than
12:00 noon on December 15, 2010, and must be addressed to the
following:

Postal mail: Clerk of the Board, Air Resources Board
1001 I Street, Sacramento, California 95814

Electronic submittal:
http://www.arb.ca.gov/lispub/comm/bclist.php

Please note that the webpage provided above for electronic
submittal is for comments on the above On-Road and Off-Road
Regulations:

To ensure that all comments are properly considered and responded
to, please identify in the subject heading of each comment letter
the regulation(s) for which comments are being submitted.

Thank you


You are receiving this single arbcombo email because you are a
subscriber to or have made a public comment to one or more of the
following lists: ag, altdiesel, board, diesel-retrofit, ej,
ej-prp, gmbond, hdghg, hdsoftware, inuseidling, loco,
ms-mailings, offroad, onrdiesel, ordiesel, orspark, portable,
porttruck, railyard, sbidling, schoolbus, swcv, truck-idling,
tru, zeb.


Thursday, October 28, 2010

Learn to work with doctors

Start on certification.





=======================================================================================================================

2008/8/1 Wouter Bolsterlee:
On 2008-08-01 at 01:02, Gabriel Burt wrote:
On Wed, Jul 30, 2008 at 5:10 PM, Wouter Bolsterlee wrote:
On 2008-07-30 at 19:14, Gabriel Burt wrote:

We've released Banshee 1.2, a month and a half after the 1.0 release. It brings some great new features and lots of bug fixes and performance improvements.

*puts on i18n coordination team hat* So, will you release 1.2.1 within 2 weeks to account for translation updates?

Yes, we will do a 1.2.1 within two weeks for translation updates. Several translatability bugs were fixed yesterday, so I recommend grabbing the latest from trunk to translate. We're now in a string freeze to let translators work.

I believe it's better to leave one or two weeks to our translators before the official release than to do a later .1 release mainly for translation. I can't forget what a hurry it was when I heard there would be a new 1.2 release a day later. But thanks anyway for such excellent work.

Thanks for taking this seriously. It is appreciated.

mvrgr, Wouter

--
:wq mail uw ... @xs4all.nl web http://uwstopia.nl
nobody loves me :: it's true :: not like you do -- portishead

Hi,

I have a small simple data frame (attached) - to compare diversity of insects encountered in a disturbed and an undisturbed site. What I have is the count of insects - the total number of times they were encountered over 30 monitoring slots. Can someone please check for me to make sure how the 'community data matrix' for the diversity function needs to be oriented, so that I'm comparing the right sets. I know that community data matrices mustn't carry characters that aren't numbers. I replaced my sps. names sp1, sp2, sp3, etc., with just 1, 2, 3 ... but not sure what I can replace 'dist' and 'undist' with! I tried this to start with:

insects.div <- diversity(insects, index="shannon")
insects.div
[1] 0.7242788 0.7485246 0.7298712 0.9012085 1.0366280 0.9470281 1.0466542
[8] 1.0133127 0.6450332

and that's not what I want. Any advice on the matrix format or commands would be a big help!

Cheers,
Manju V. Sharma
Gardenwood East 3.1, Division of Biology, Imperial College London, Silwood Park Campus
Ascot SL5 7PY UK
Tel (O): 0044 207 5942360 (R): 0044 207 8520808

habitat   dist   undist
1           56       73
2           86       75
3            0       33
4            0       21
5            0        4
6           20       29
7           13       16
8           24       18
9            0       17
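A minimal sketch of the orientation question, assuming the diversity() call above is vegan's: diversity() treats each row as a site/sample and each column as a species, so with species as rows (as in the table above) it returns nine values, one per species, which matches the output shown. Transposing the counts gives one Shannon index per habitat; the object and names below simply mirror the posted data.

library(vegan)  # provides diversity()

# counts from the post: 9 species (rows) x 2 habitats (columns)
insects <- data.frame(
  dist   = c(56, 86, 0, 0, 0, 20, 13, 24, 0),
  undist = c(73, 75, 33, 21, 4, 29, 16, 18, 17),
  row.names = paste0("sp", 1:9)
)

# diversity() expects sites as rows and species as columns,
# so transpose before computing the index
comm <- t(insects)
diversity(comm, index = "shannon")
# one Shannon index per habitat (roughly 1.38 for dist, 1.94 for undist),
# rather than the nine per-species values shown above

Note that the 'dist'/'undist' and species labels do not need to be replaced with numbers: they can stay as row and column names (dimnames); only the cell values themselves need to be numeric.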

Wednesday, October 27, 2010

arbcombo -- ARB Chair's Seminar Series: "Cool Pavements for Cool Communities"

We are pleased to announce the next Series topic:

"Cool Pavements for Cool Communities".

Haley Gilbert,
Lawrence Berkeley National Laboratory
Melvin Pomerantz,
Lawrence Berkeley National Laboratory, and
Paulette Salisbury,
California Nevada Cement Association

Tuesday, November 9, 2010 1:30-2:30 pm PST
Byron Sher Auditorium, 2nd Floor, Cal/EPA Building
1001 I Street, Sacramento, California

Announcement and Presentation can be viewed at:
http://www.arb.ca.gov/research/seminars/seminars.htm

For "internal" users please check the internal webcast calendar
at:

http://epanet.ca.gov/broadcast/?bdo=1

For "external" users please check the external webcast calendar
at:

http://www.calepa.ca.gov/broadcast/?bdo=1

For your added convenience, while viewing the webcast,
presentations can be downloaded at:

http://www.arb.ca.gov/research/seminars/seminars.htm

Your e-mail questions will be aired during the
question & answer period following the presentations.

Webcast Viewers, e-mail your questions to:

auditorium@calepa.ca.gov

For more information on this seminar please contact:
Ash Lashgari, Ph.D. at (916) 323-1506 or klahgar@arb.ca.gov

For more information on this Seminar and Series please contact:
Peter Mathews at (916) 323-8711 or pmathews@arb.ca.gov

To receive notices for upcoming Seminars please go to:
http://www.arb.ca.gov/listserv/listserv.php
and sign up for the seminars list serve.


You are receiving this single arbcombo email because you are a
subscriber to or have made a public comment to one or more of the
following lists: ab32publichealth, arch-ctgs, capandtrade, cc,
cement, ceqa, climatechampions, cool-cars, localaction, research,
sb375, seminars, training.


Sunday, October 24, 2010

Relieve distress in patients

Get Educated





===========================================================================================================================

On 11/26/2009 04:31 AM, Hans de Goede wrote:

Hi Doug, That is a lot of information in there, let me try to summarize it and please let me know if I've missed anything:

1) The default chunksize for raid4/5/6 is changing, this should not be a problem as we do not specify a chunksize when creating new arrays

I thought we did specify a chunksize. Oh well, that just means our default raid array performance will improve dramatically. The old default of 64k was horrible for performance relative to the new 512k default.

        4 disks on MB        5 disks on MB        4 disks on PM
        write     read       write     read       write     read
64K     509.373   388.870    403.947   370.963    103.743   61.127
512K    502.123   498.510    460.817   487.720    113.897   111.980

MB = Motherboard ports
PM = single eSATA port to a port multiplier

Note: going from 4 disks to 5 disks on this one machine resulted in a performance drop, which is a likely indicator that there were bus saturation issues between the memory subsystem and the southbridge and that 5 disks simply over saturated the southbridge's capacity.

2) The default bitmap chunk size changed, again not a problem as we don't use bitmaps in anaconda atm

3) We need to change the not using of a bitmap, we should use a bitmap by default except when the array will be used for /boot or swap.

Correct. The typical /boot array is too small to worry about, it can usually be resynced in its entirety in a matter of seconds. Swap partitions shouldn't use a bitmap because we don't want the overhead of sync operations on the swap subsystem, especially since its data is generally speaking transient. Other filesystems, especially once you get to 10GB or larger, can benefit from the bitmap in the event of an improper shutdown.

Questions: 1) What commandline option should we pass to "mdadm --create" to achieve this?

--bitmap={none,internal}

In the future, if we opt for something other than the default bitmap chunk, then when the above is internal we would also pass: --bitmap-chunk=

4) We need to start specifying a superblock version, and preferably version 1.1

No, we *must* start specifying a superblock version or else we will no longer be able to boot our machines after a clean install. The new default is 1.1, and I'm perfectly happy to use that as the default, but as far as I'm aware, the only boot loader that can use a 1.1 superblock based raid1 /boot partition is grub2, so all the other arches would not be able to boot and we would have to forcibly upgrade all systems using grub to grub2.

5) Specifying a superblock version of 1.1 will render systems non bootable, I assume this only applies to systems which have a raid1 /boot, so I guess that we need to specify a superblock version of 1.1, except when the raid set will be used for /boot, where we should keep using 0.9

Questions: 1) Is the above correct ?

No, not quite. You can use superblock version 1.0 on /boot and grub will then work. Both version 0.90 and version 1.0 superblocks are at the end of the device and do not confuse boot loaders.
Here's a summary of superblock format differences: Version 0.90: Stored at end of device Has no homehost field in the superblock but most recent versions of mdadm would hash the name of the machine and use that for half of the UUID, which provided a pseudo homehost entry Limited to 27 constituent devices Has no name field in the superblock Has a preferred-minor field in the superblock Does not contain sufficient information to distinguish between a superblock at the end of a whole device or a superblock at the end of a single partition on the whole device (aka, create a single partition on a drive that uses the whole drive, place a version 0.90 superblock on that drive, then you will be able to pass in either the whole disk or the partition to an mdadm assemble command and mdadm can't tell via the information in the superblock if you have passed in the right device). Common to all version 1.x superblocks: Has homehost and name fields (actually, one field with a max length of 32 chars) Full UUID is generated, none hashed, so more bits of randomness on UUID No limit to number of constituent devices Has no preferred-minor field in the superblock, but can be emulated by use of appropriate entry in name field Version 1.0: Located at end of device where version 0.90 superblocks are also located Contains sufficient information to differentiate between being a superblock for the whole device or just a partition on the device Version 1.1: Located at very beginning of device. If placed on a whole disk device, occupies the same space as the MBR and partition table and does not leave room for them. Data is offset after superblock, and as such the normal device can not be used to access the data, only the md device. Version 1.2: Located at beginning of device + 4K. This offset allows for the MBR and partition table to have the first 4K. This can, however, cause confusing situations when used on whole disk devices as you are able to partition the device, but the entire device is the raid device, so the partition is meaningless even if present. It does, however, allow for booting off of these devices (theoretically, I don't think anyone is doing so and I suspect even grub2 would need more work to make this operational). 6) When creating 1.1 superblock sets we need to pass in: --homehost= --name= -e{1.0,1.1,1.2} Questions 1) Currently when creating a set, we do for example: mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1 What would this look like with the new mdadm, esp, what would happen to the /dev/md0 argument ? The /dev/md0 argument is arbitrary. It could be /dev/md0, it could be /dev/md/foobar. However, if we insist on sticking with the old numbered device files, then it is certain that we should also do our best to make sure that the --name field we pass in is in the special format needed to get mdadm to automatically assume we want numbered devices. In this case, --name=0 would be appropriate. But this actually ignores a real situation that some of us use to get around the brokenness of anaconda for many releases now. I typically start any install by first burning the install image to CD, then booting into rescue mode, then hand running fdisk on all my disks to get the layout I want, then hand creating md raid arrays with the options I want, then hand creating filesystems on those arrays or swap spaces on those arrays with the options I want. 
Then I reboot in the install mode on the same CD, and when it gets to the disk layout, I specify custom layout and then I simply use all the filesystems and md raid devices I created previously. However, even if I use version 1.x superblocks, and even if I use named md raid arrays, anaconda always insists on ignoring the names I've given them and assigning them numbers. Of course, the numbers don't necessarily match up to the order in which I created them, so I have to guess at which numbered array corresponds to which named array (unless there are obvious hints like different sizes, but in the last instance I was doing this I had 7 arrays that were all the same size, each intended to be a root filesystem for a different version of either RHEL or Fedora). Then, once the install is all complete, I have to go back into rescue mode, remount the root filesystem, hand edit the mdadm.conf to use names instead of numbers, remake the initrd images (now dracut images), change any fstab entries, then I can finally use the names. Really, it's *very* annoying that this minor number dependence in anaconda has gone on so long. It was outdated 7 or 8 Fedora releases ago. If we can still specify which minor to use when creating a new array, even though that minor may change after the first reboot, then the amount of changes needed to the installer are minimal and we can likely do this without problems for RHEL-6. I don't understand. Please enlighten me as to these requirements on minor numbers in the installer. After all, it's not like there isn't a simple means of naming these things: If md raid device used for lvm pv, name it /dev/md/pv-# If md raid device used for swap, name it /dev/md/swap-# If md raid device used for /, name it /dev/md/root If md raid device used for any other data partition, name it /dev/md/ And it's not like anaconda doesn't already have that information available when its creating filesystem labels, so I'm curious why it's so hard to use names instead of numbers for arrays in anaconda? Regards, Hans On 11/26/2009 03:59 AM, Doug Ledford wrote: Please keep me on the Cc: as I'm not on this list. Upstream recently released mdadm-3.1.1, which I intend to include in Fedora soon. It finally updates three default settings that should have been updated a long time ago. The default chunk size for raid4/5/6 is now 512K. Anaconda needs to be updated to either leave the default alone or use 512K itself. In the past it has passed in 256K, but extensive performance testing shows that 512K is indeed the sweet spot on pretty much any SATA device, which simply due to SATA being the overwhelming majority of disks we run on today, it's sweet spot should be our default. It updates the default bitmap chunk to be at least 65536K when using an internal bitmap. Performance tests showed as much as a 10% performance penalty for the old default bitmap chunk (8192K). The new bitmap chunk reduces that performance penalty (although we don't have solid numbers on how much...I'll work on that). However, we've never used a bitmap by default on any arrays we create. That needs to change. The simple logic is this: no bitmap on /boot or any swap partitions, use a bitmap on anything else. If we need a bitmap chunk other than the default, I'll follow up here. It updates the default superblock format from the old, antiquated, deprecated version 0.90 superblock that we should have quit using years ago to version 1.1. This is the real kicker. 
Since anaconda has never actively set the superblock metadata version (even though we should have been using 1.1 long ago), it's now going to have to start. The reason is that unless you upgrade machines to use an md raid aware boot loader, such as grub2 for x86 although I have no idea what would work on non-x86 arches, version 1.1 superblocks will render all installs unbootable. More importantly though, unless the anaconda team decides to blindly set all superblocks back to the old version 0.90 format, this change necessitates more than just a change to controlling which version of 1.x superblock we use on any given array, but also a change to how we create and name arrays in general. Version 0.90 superblocks are from back in the day when we thought it was smart/reasonable to name arrays by number and to mount scsi devices in fstab by their /dev/ entry. That day has long since been gone, dead and buried. We switched filesystems to mount by label so they are immune to device number changes and similarly version 1.x superblocks totally do away with the preferred-minor field in the superblock. Instead, they have a homehost and name field that are used to control device *naming*, not numbering, and in a properly running version 1.x superblock system, the device numbers are not guaranteed to be static from boot to boot (although they usually are). This doesn't appear to be much problem for dracut, but as an example, I'm attaching the mkinitrd patch I have to apply to an F11 system after every mkinitrd update in order to get initrd images that mount by name properly. So, those are the major differences. Switching to any of the version 1.x superblocks necessitates that anaconda pass a few arguments that it hasn't in the past. Right now, these are the things anaconda is going to need to start passing in on any mdadm create commands (that I don't currently believe it does, but I haven't checked and could be wrong): --homehost= --name= -e{1.0,1.1,1.2} In addition, we should start passing the bitmap option as I outlined above. We will also likely need to set the HOMEHOST entry in mdadm.conf and possibly the AUTO entry in mdadm.conf as well. And this brings me to a different point. Hans asked me to comment on bz537329. I would suggest people look at my comments there for some additional explanation of why ideas like trying to make things work without mdadm.conf are probably a bad idea. So here are a few additional things that I think are worth taking into consideration. If an array is listed in mdadm.conf, then *every* item on the array line must match the array or else it will fail to start. This means that ARRAY lines that list things that can change by using mdadm --grow to change aspects of the array can result in the array failing to be found on the next reboot. Therefore, it would be best if each new ARRAY line we write includes nothing besides the name of the array, the metadata version, and the UUID. If an array is listed in mdadm.conf, then both the --homehost and --name settings will be overridden by the name in the mdadm.conf file, so do not depend on either having an effect for arrays listed in mdadm.conf. However, homehost and name are both used heavily any time the array is not listed in mdadm.conf so setting them correctly is still important. 
There are a number of common scenarios that make this important: you are carrying an array from machine to machine (like an external drive tower, or raid1 usb flash drive, etc.), when an array is visible to multiple hosts (like arrays built over SAN devices), or when you've built a machine to replace an existing machine and you temporarily install the drives from the machine being replaced in the new machine to copy data across in which case you are starting both your new array and the old array on the same machine. They are also relied upon heavily in order to attempt to satisfy those people that think the md raid stack should work without any mdadm.conf file at all. And there is a special case exception in the name field that is used to attempt to preserve back compatibility. The intersection of all these attempts to satisfy various needs is tricky. Here's how names are determined: 1) If the array is identified in mdadm.conf, the name from the ARRAY line is used. 2) If HOMEHOST has been set in the config a) If the array uses a version 0.90 superblock, check to see if the HOMEHOST has been encoded in the UUID via hash. If not, treat as foreign, if so, treat as local. b) For version 1.x superblocks check the homehost in the superblock against the set homehost. If they match, treat as local, else if the homehost in the superblock is not empty treat as named foreign else treat as foreign. 3) else a) for version 0.90 superblocks treat the array as foreign. b) for 1.x if homehost is set then named foreign else foreign. In case #1, the name as it's in the file is used. If the remainder of cases, local means to attempt to create the array with the requested number (in the case of 0.90 superblocks) or requested name (in the case of version 1.x superblocks). Foreign means that the array will be started with the requested name + a suffix. For example, version 0.90 superblock with preferred-minor of 0 would get created with a random *actual* minor number and the name /dev/md0_0 or md0_1 if md0_0 already exists, etc. A version 1.x superblock with the name root would get created as /dev/md/root_0. Named foreign is used whenever a version 1.x superblock can't be identified as local but it has a valid homehost entry in the superblock. The format attempt is /dev/md/homehost:name so that if you were to mount an array from workstation2:root on workstation1, it would be /dev/md/workstation2:root. There is a special exception for version 1.x superblock arrays. If the name field of the superblock contains a specially formatted name, then it will be treated as a request to create the device with a given minor number and name identical to an old version 0.90 superblock array. Those special case names are: a) a bare number (aka, 0) b) a bare name using standard number format (aka, md0 or md_d0) c) a full name using standard number format (aka, /dev/md0 or /dev/md_d0) If an array uses a name instead of a number, then the named entry created in /dev/md/ will be a symlink to a random numeric md device in /dev/. For example, /dev/md/root, since it's the first device started and since we start grabbing md devices at 127 and counting backwards when starting named devices, will almost always point to /dev/md127. The /dev/md127 file will be the real device file while the entries in /dev/md/ are always symlinks. This is in order to be consistent with the fact that our /sys/block entry will be md127 and our entry in /proc/mdstat will also be md127. 
This is because the current /sys/block setup does not allow /sys/block/md/root, only md.

--
Doug Ledford < dled ... @redhat.com > GPG KeyID: CFBFF194
http://people.redhat.com/dledford
Infiniband specific RPMs available at http://people.redhat.com/dledford/Infiniband
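Pulling together the options discussed in this thread, here is a rough sketch of what the create commands and matching mdadm.conf lines might look like under the new defaults. The hostname, device names, and UUIDs are purely hypothetical placeholders; this is not what anaconda actually emits, just an illustration of the flags mentioned above (--bitmap, -e, --homehost, --name).

# /boot: keep the superblock at the end of the device (version 1.0) so grub can still read it, no bitmap
mdadm --create /dev/md/boot --run --level=1 --raid-devices=2 \
      -e1.0 --bitmap=none --homehost=examplehost --name=boot \
      /dev/sda1 /dev/sdb1

# other filesystems: new-style 1.1 superblock plus an internal write-intent bitmap
mdadm --create /dev/md/root --run --level=1 --raid-devices=2 \
      -e1.1 --bitmap=internal --homehost=examplehost --name=root \
      /dev/sda2 /dev/sdb2

# /etc/mdadm.conf: minimal ARRAY lines as recommended above -- array name, metadata version, and UUID only
# (UUIDs are placeholders; use the values reported by mdadm --detail)
ARRAY /dev/md/boot metadata=1.0 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md/root metadata=1.1 UUID=11111111:11111111:11111111:11111111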

Monday, October 18, 2010

Prepare products for patients

Study at your own pace





================================================================================================================ On 26 Apr 2008, at 16:40, Guillaume CERQUANT wrote: On Apr 5, 2008, at 01:24 , Citizen wrote: On 4 Apr 2008, at 23:09, Matt Penna wrote: On Apr 4, 2008, at 5:50 PM, Michael Brian Bentley wrote: What clears all the Exposé shortcut key assignments? On the Exposé & Spaces preference panel, there normally are default assignments for the All windows, Application windows, and Show Desktop functions. Something on my machine clears 'em, sometimes without a reboot. 10.5.2 and 3GB on a MBP17,2, no haxies. Do you play World of Warcraft in full screen mode? That often does exactly what you describe. I've experienced the same problem, but I have never played World of Warcraft. That still doesn't rule out using full screen mode, in general, as the problem though. (Unless someone knows otherwise.) Idem for me. Occuring on a PowerBook and an intel iMac, in 10.5.2. I filed a bug. Radar ID: 5892202 On my system (10.5.2 PPC iMac) I've narrowed the cause down to using Time Machine. I haven't narrowed it down any further than that - i.e. if I need to do something particular in Time Machine, or if just using Time Machine in general causes the problem. So the common factor still seems to be Full Screen Mode - although this could be Crate Worship. - Dave ------ David Kennedy ( http://www.zenopolis.com ) It's time for This Week in OpenNMS < http://www.opennms.org >. In the last week, we did some more work in preparation for 1.8.1, worked a bit more on the iPhone/iPad app, and did a huge amount of bugfixing. Project Updates * *1.8: Current Release is 1.8.0* 1.8.0 is the current stable release, tagged June 7th. The first major stable release in the 1.8 series, it adds a whole slew of new features compared to 1.6. For a high-level overview, see the "New and Noteworthy" page on the OpenNMS wiki < http://www.opennms.org/wiki/New_and_noteworthy#New_in_OpenNMS_1.8 >. While we consider this release to be stable, a ton has changed. It is recommended that you back up your database, and test an upgrade on non-production hardware before moving to 1.8 in production. * *1.8: Inline Thresholding Regression* Inline thresholding was enabled in the default configs late in the 1.6 series, but it was never enabled by default in the 1.7 branch, so 1.8.0 was released */without/* inline thresholding enabled. This was changed with the fixing of bug #3912 < http://bugzilla.opennms.org/show_bug.cgi?id=3912 >, so be aware when merging configs when 1.8.1 comes out! * *1.8: Remote Poller Maps Updates* Matt, Donald, and I did a bunch of work on the remote poller maps, doing a huge amount of optimization of the queries used to pull poller data into the UI, adding support for Mapquest's click and double-click behaviour (center, and center+zoom), and adding support for multiple map types in the OpenLayers implementation. OpenLayers Maps * *1.8: Tons of Bug Fixes* I did a bunch of work going through Bugzilla this week, trying to close out bugs in preparation for 1.8.1, which will go into code slush next monday, and be released on the 12th. 
Bugs Fixed Since Last TWiO * #1181 < http://bugzilla.opennms.org/show_bug.cgi?id=1181 >: Collectin Windows disk space, trying to poll the CD drive * #1920 < http://bugzilla.opennms.org/show_bug.cgi?id=1920 >: javamail using authentication encodes the username and password twice * #1959 < http://bugzilla.opennms.org/show_bug.cgi?id=1959 >: Too many calls to getlocahost() * #2922 < http://bugzilla.opennms.org/show_bug.cgi?id=2922 >: PSQLException in poller backend: DB field length exceeded on remote location monitor status update * #2944 < http://bugzilla.opennms.org/show_bug.cgi?id=2944 >: java.lang.NullPointerException on KSC Graphs * #3124 < http://bugzilla.opennms.org/show_bug.cgi?id=3124 >: HttpMonitor doesn't check JSON repsonses for response-text * #3192 < http://bugzilla.opennms.org/show_bug.cgi?id=3192 >: New Feature: allow syslogd to bind to specific ipaddress * #3280 < http://bugzilla.opennms.org/show_bug.cgi?id=3280 >: Equallogic iSCSI array performance data * #3283 < http://bugzilla.opennms.org/show_bug.cgi?id=3283 >: Reparenting of iLO interface on HP servers not working with ESX4 * #3291 < http://bugzilla.opennms.org/show_bug.cgi?id=3291 >: provisiond : snmpinterfaces not created * #3296 < http://bugzilla.opennms.org/show_bug.cgi?id=3296 >: running the installer without the database running throws an exception about "The database server's error messages are not in English" * #3306 < http://bugzilla.opennms.org/show_bug.cgi?id=3306 >: 1.7.svn (fresh today) does not show service, availability on node page * #3514 < http://bugzilla.opennms.org/show_bug.cgi?id=3514 >: default datacollection-config.xml breaks alias length limit * #3536 < http://bugzilla.opennms.org/show_bug.cgi?id=3536 >: Unable to use the "percent sign (%)" in a notification text message * #3576 < http://bugzilla.opennms.org/show_bug.cgi?id=3576 >: Fix send-event.pl script to encode time in DateFormat.LONG * #3578 < http://bugzilla.opennms.org/show_bug.cgi?id=3578 >: Incorrect http content-type header for svg request * #3589 < http://bugzilla.opennms.org/show_bug.cgi?id=3589 >: Advanced Alarm Search some of the Sort by options don't work properly * #3598 < http://bugzilla.opennms.org/show_bug.cgi?id=3598 >: An Exception occurs when you try to create a surveillance category that already exists. 
* #3622 < http://bugzilla.opennms.org/show_bug.cgi?id=3622 >: Bugs with Hyperic servlets * #3624 < http://bugzilla.opennms.org/show_bug.cgi?id=3624 >: Rename from Import to Synch(cronize) in Prov Groups * #3632 < http://bugzilla.opennms.org/show_bug.cgi?id=3632 >: admin role is negated for users also in readonly role * #3637 < http://bugzilla.opennms.org/show_bug.cgi?id=3637 >: JasperException PWC6033 * #3644 < http://bugzilla.opennms.org/show_bug.cgi?id=3644 >: trying to add list of ip range to discover * #3656 < http://bugzilla.opennms.org/show_bug.cgi?id=3656 >: mib2opennms does not install in Debian Lenny * #3675 < http://bugzilla.opennms.org/show_bug.cgi?id=3675 >: Events missing page counter * #3722 < http://bugzilla.opennms.org/show_bug.cgi?id=3722 >: Debian Packages: opennms-contrib missing dependency for libxml-twig-perl * #3847 < http://bugzilla.opennms.org/show_bug.cgi?id=3847 >: Page Sequence Monitor still submitting multiple 'Cookie:' response headers even with BROWSER_COMPATIBILITY is enabled * #3869 < http://bugzilla.opennms.org/show_bug.cgi?id=3869 >: Data Collection Failed Event (dataCollectionFailed) not informative * #3871 < http://bugzilla.opennms.org/show_bug.cgi?id=3871 >: linkd not showing links between nodes and cisco switches * #3899 < http://bugzilla.opennms.org/show_bug.cgi?id=3899 >: Patch: allow per node filtering in notifications list * #3900 < http://bugzilla.opennms.org/show_bug.cgi?id=3900 >: dashboard user has security issues * #3901 < http://bugzilla.opennms.org/show_bug.cgi?id=3901 >: add support for click/double-click handlers * #3910 < http://bugzilla.opennms.org/show_bug.cgi?id=3910 >: Patch: display First Next Previous links in events list at the bottom of the page * #3912 < http://bugzilla.opennms.org/show_bug.cgi?id=3912 >: inline thresholding is no longer enabled by default * #3915 < http://bugzilla.opennms.org/show_bug.cgi?id=3915 >: regular expression "pattern"s in XSDs are not evaluated * #3923 < http://bugzilla.opennms.org/show_bug.cgi?id=3923 >: Stale location specific status change events should be deleted * #3924 < http://bugzilla.opennms.org/show_bug.cgi?id=3924 >: NPE handled too gracefully by MailAckProcessor in Ackd Goof-Up of the Week Jeff relayed this tale of the importance of having priorities in scrum this morning... :) I spent last week helping a telecoms industry client implement OpenNMS. Toward the end of the second day we decided to switch from Capsd discovery to Provisiond requisitions, so I wrote a small script that reads an inventory dump from their old NMS (CA Spectrum) and creates an OpenNMS requisition describing the same nodes. The database was full of junk events from before we'd put in place some new SNMP trap definitions, so I went about removing all that stuff manually before importing the new requisition. Several folks were going out for a beverage and invited me along, so I hurried in order not to hold them up. The next morning there were over 600 nodes in the system instead of the 391 I was expecting. I double-checked that Capsd was turned off, that Provisiond was not handling newSuspect events, and that there were in fact no such events in the database, then contacted the development team about my suspicion of a bug. After nearly an hour, the client mentioned that he was seeing some duplicate nodes that didn't have any events, alarms, or notifications associated with them. 
Only then did it dawn on me that, in my haste to get my hands on some suds, I had forgotten to delete the old nodes themselves before importing the new requisition! Upcoming Events * July 7th-9th, 2010: Ben will be speaking at OpenStreetMaps' State of the Map 2010 < http://stateofthemap.org/ >, in Girona, Spain. * July 21st, 2010: Tarus will be giving his "So, You Think You Want to Start an Open Source Business?" talk at OSCON < http://www.oscon.com/oscon2010/public/schedule/detail/13160 > * July 26th-30th, 2010: OpenNMS Dev-Jam 2010 < http://www.opennms.org/wiki/Dev-Jam_2010 > will be held at the University of Minnesota in Minneapolis, MN If you have anything to add to the events list, please let me know . Until Next Week... As always, if there's anything you'd like me to talk about in a future TWiO, or you just have a comment, criticism, or blocking bug closing machines that you'd like to share, don't hesitate to say hi . -- Benjamin Reed The OpenNMS Group http://www.opennms.org/ ------------------------------------------------------------------------------ This SF.net email is sponsored by Sprint What will you do first with EVO, the first 4G phone? Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first _______________________________________________ Please read the OpenNMS Mailing List FAQ: http://www.opennms.org/wiki/index.php?page=MailingListFaq opennms-announce mailing list To *unsubscribe* or change your subscription options, see the bottom of this page: https://lists.sourceforge.net/lists/listinfo/opennms-announce ------------------------------------------------------------------------------ This SF.net email is sponsored by Sprint What will you do first with EVO, the first 4G phone? Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first _______________________________________________ Please read the OpenNMS Mailing List FAQ: http://www.opennms.org/index.php/Mailing_List_FAQ opennms-discuss mailing list To *unsubscribe* or change your subscription options, see the bottom of this page: https://lists.sourceforge.net/lists/listinfo/opennms-discuss

Friday, October 15, 2010

Find your match

Many In Your Area





================================================================================================================================ Mark Hansen wrote: On 2/27/2010 11:57 AM, Frog wrote: I just added SeaMonkey 2.0.3 on my system. I anticipated that I would have to re-establish my ICONs for 1.1.8 on my desktop when I started this process. I, however, am having a memory moment--I can't remember where the mail short cut points to. The loading of the new 2.0.3 version is working just fine so far. Frog Thank you Mark for helping me again. You probably mean SM 1.1.18. If not, you should probably consider upgrading your 1.1 version to 1.1.18. Yes, I was talking about 1.1.18--sorry for my mistake. To launch SM 1.1.X and have it open the Mail & Newsgroups windows, just pass it the -mail command line option, as in: C:\seamonkey-1.1\seamonkey.exe -mail where C:\seamonkey-1.1 is the directory where SM 1.1.X is installed. Ok, I will have to think about this for a moment. I presently have 2 ICONs on my desktop--one opens the web page window for 2.0.3 and the other opens the web page for 1.1.18. That is what I want for the web pages. Now, I am attempting to create 2 additional ICONs on the desktop for opening mail in each version. The ICON for 2.0.3 (which for now will not be my default browser) was established when 2.0.3 was installed. When I click it, it keeps asking me if I want to make it my default browser. When I say no, it reverts (I think) to 1.1.18. I say this because--when I click Help and then About SeaMonkey (subsequent to clicking NO to the default question) I see a window that opens to 1.1.18. I'm not sure what is going on at the moment. I think I am sending this message using 2.0.3--but I'm not sure of that fact--because, after clicking help, I again see 1.1.18. Note: I allowed to installation to install in the default locations. I have the following under Documents and Settings: C:\Documents and Settings\ Frog \Application Data\Mozilla\ Extensions Profiles SeaMonkey I believe the Extensions and Profiles Folders are connected with 1.1.18, while SeaMonkey is associated with 2.0.3. I have the following under Program Files: C:\Program Files\Mozilla.org\SeaMonkey (followed by a number of sub folders. I believe this is associated with 1.1.18 C:\Program Files\SeaMonkey (followed by a number of sub folders). I believe this is associated with 2.0.3 Now I must determine how to do what you suggest. I am not sure that I can understand your recommendation--thus, I will likely be back with further questions. Note that you generally can't have both SM 2.X and SM 1.1.X running at the same time, so make sure you manage this (or look into the command line options needed to make this possible). I understand that both versions should not be active at the same time until I make the change suggested in an earlier exchange we had on this subject. Thanks for reminding me again. I still have to fix the server message retain for 2.0.3 and message delete for 1.1.18. Best Regards, Thanks for the help, Frog _______________________________________________ support-seamonkey mailing list supp ... @lists.mozilla.org https://lists.mozilla.org/listinfo/support-seamonkey

Thursday, October 14, 2010

Learn how to teach children

Start studying today.





=======================================================================================================================

Hello,

In this second article, Jeremy Grelle continues his exploration of Spring Faces with a sample application that demonstrates the Spring-centric integration approach. Here is an excerpt:

The first article in this series < http://www.jsfcentral.com/articles/intro_spring_faces_1.html > introduced Spring Faces at a high, conceptual level. It examined how Spring Faces enables a Spring-centric approach to integration between JSF and Spring, allowing you to take advantage of the strengths of the JSF component model while retaining access to the full breadth of the de facto standard Spring programming model. It showed some of the advantages of assuming Spring Web Flow as the primary controller model for a JSF application, and began to examine the structure of the Spring Travel sample application that demonstrates the Spring-centric integration approach. In part 2, I'm going to pick up where we left off and dive into the code of the sample application to show how Spring Faces simplifies JSF development.

Read the full article here: http://www.jsfcentral.com/articles/intro_spring_faces_2.html

---
Kito D. Mann | twitter: kito99 | Author, JSF in Action
Virtua, Inc. | http://www.virtua.com | JSF/Java EE training and consulting
http://www.JSFCentral.com - JavaServer Faces FAQ, news, and info | twitter: jsfcentral
+1 203-404-4848 x3
Sign up for the JSFCentral newsletter: http://oi.vresp.com/?fid=ac048d0e17

Hi,

I figured out that this is indeed a 2008-JDBC-driver "feature". The quick workaround is to use the older 2005 JDBC driver. Medium-term we will change the related columns to datetime2(3), which gives exactly the millisecond precision we want.

This simulates what Microsoft does before storing datetime:

    public static Timestamp roundTimestampLikeMicrosoft(Timestamp source) {
        // http://msdn.microsoft.com/de-de/library/ms187819.aspx
        // datetime values are rounded to increments of .000, .003, or .007 seconds
        if (source == null) {
            return null;
        }
        long time = source.getTime();
        long remainder = time % 10;
        if (remainder == 0 || remainder == 3 || remainder == 7) {
            return source; // won't change
        }
        if (remainder == 9) {
            return new Timestamp(time + 1); // .009 rounds up to the next .000
        }
        if (remainder < 2) {
            remainder = 0;
        } else if (remainder < 5) {
            remainder = 3;
        } else {
            remainder = 7;
        }
        return new Timestamp((time / 10) * 10 + remainder);
    }

For clarification: the problem with the new driver is NOT that the database stores at low precision (no difference from the old one, no bug). The problem arises when comparing (for a select). There seems to be an "optimization" in the new driver that leads to the described symptoms. I can't even say whether this is a bug or a shot in the foot caused by wrong usage ...

Regards,
Karl

Karl Eilebrecht
Key-Work Consulting GmbH | Kriegsstr. 100 | 76133 Karlsruhe | Germany | www.key-work.de
Fon: +49-721-78203-277 | E-Mail: karl ... @key-work.de | Fax: +49-721-78203-10
Key-Work Consulting GmbH Karlsruhe, HRB 108695, HRG Mannheim
Managing directors: Andreas Stappert, Tobin Wotring

-----Original Message-----
From: Eilebrecht, Karl (Key-Work)
Sent: Monday, 21 June 2010 07:44
To: de ... @ofbiz.apache.org
Subject: SQL-Server datetime issue

Hi,

this is only for people using Microsoft SQL Server; no need to read this if you're using another database.

We just stumbled across a problem while migrating to SQL Server 2008 R2 64-bit with the newest JDBC driver.
It seems that the datetime datatype (used for timestamps) was never suitable for being part of a primary key, due to rounding issues: http://msdn.microsoft.com/en-us/library/ms187819.aspx

However, it is used that way (in some relations and in some of our own tables), and we actually never faced any problems with that - up to now. With all older versions everything worked fine. But under certain circumstances, with the newest server/driver combination, we can now reproduce an error when calling createOrStore twice with the same PK values right after one another: the second statement results in a primary key violation. There is a similar problem when executing findByPrimaryKey(entityPkJustStored) - not found. The reason seems to be that the value itself is stored at a slightly lower precision than the precision at which the comparison (the second call) is processed.

We're still investigating and are currently discussing two options:
(1) use datetime2 (higher precision)
(2) remove any "rounding-prone" datatypes like datetime from primary keys (replace them with integer types)

Regards,
Karl

Karl Eilebrecht
Key-Work Consulting GmbH | Kriegsstr. 100 | 76133 Karlsruhe | Germany | www.key-work.de
Fon: +49-721-78203-277 | E-Mail: karl ... @key-work.de | Fax: +49-721-78203-10
Key-Work Consulting GmbH, Karlsruhe, HRB 108695, HRG Mannheim
Managing directors: Andreas Stappert, Tobin Wotring
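To make the failure mode and the Java-side workaround concrete, here is a minimal JDBC sketch. It is not OFBiz code: the connection URL, the demo_entity table, and its columns are invented for illustration, and it assumes the Microsoft SQL Server JDBC driver is on the classpath. The idea is simply to push the Timestamp through the same rounding SQL Server applies to datetime (reusing the helper from the post above) before it is used for both the insert and the lookup, so both sides compare the same value; with an unrounded Timestamp the SELECT can miss the freshly inserted row under the 2008 driver, which is exactly the createOrStore / findByPrimaryKey symptom described above.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Timestamp;

    public class DatetimePkDemo {

        public static void main(String[] args) throws Exception {
            // Hypothetical connection URL and table; adjust for your environment.
            // demo_entity is assumed to have a DATETIME primary-key column created_stamp.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=demo;user=sa;password=secret")) {

                // Round the key value the same way SQL Server's datetime will,
                // so the later SELECT compares against exactly the stored value.
                Timestamp key = roundTimestampLikeMicrosoft(
                        new Timestamp(System.currentTimeMillis()));

                try (PreparedStatement insert = con.prepareStatement(
                        "INSERT INTO demo_entity (created_stamp, txt) VALUES (?, ?)")) {
                    insert.setTimestamp(1, key);
                    insert.setString(2, "example row");
                    insert.executeUpdate();
                }

                try (PreparedStatement select = con.prepareStatement(
                        "SELECT txt FROM demo_entity WHERE created_stamp = ?")) {
                    select.setTimestamp(1, key); // same rounded value on both sides
                    try (ResultSet rs = select.executeQuery()) {
                        System.out.println(rs.next() ? "found" : "NOT found");
                    }
                }
            }
        }

        // Same rounding as the helper in the post above.
        static Timestamp roundTimestampLikeMicrosoft(Timestamp source) {
            if (source == null) {
                return null;
            }
            long time = source.getTime();
            long remainder = time % 10;
            if (remainder == 0 || remainder == 3 || remainder == 7) {
                return source;
            }
            if (remainder == 9) {
                return new Timestamp(time + 1); // .009 rounds up to the next .000
            }
            if (remainder < 2) {
                remainder = 0;
            } else if (remainder < 5) {
                remainder = 3;
            } else {
                remainder = 7;
            }
            return new Timestamp((time / 10) * 10 + remainder);
        }
    }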

Wednesday, October 13, 2010

Acquire the ability to do MRIs

Get Started





================================================================================================================

Ah, I thought that mail was lost forever due to its excessive length! I wrote the method up more on the scons wiki, and I've implemented it in more detail at work on live assets - it seems to work really well. The gist is queuing up a new builder inside a source scanner - contrary to what you say about adding new nodes during the build, scons doesn't seem to mind at all - the new nodes get added to the graph and built as usual, and the scanners all get run at the right times.

_________________________________________________________
This email, including attachments, is private and confidential. If you have received this email in error please notify the sender and delete it from your system. Emails are not secure and may contain viruses. No liability can be accepted for viruses that may be transferred by this email or any attachment.
The Walt Disney Company Limited. Registered Office: 3 Queen Caroline Street, Hammersmith, London W6 9PE. Registered in England and Wales. Registered No. 530051

-----Original Message-----
From: users-return-1026729+George.Foot=disn ... @scons.tigris.org [mailto:users-return-1026729+George.Foot=disn ... @scons.tigris.org] On Behalf Of Greg Noel
Sent: 15 January 2009 17:24
To: use ... @scons.tigris.org
Subject: Re: [scons-users] Target-driven builder creation

On Dec 2, 2008, at 9:36 AM, Foot, George wrote:

1) Some builders output a range of files with filenames dependent on the content of their sources
2) Some builders don't know what sources they depend on until some of their other sources are available

I was hoping that Gary would rise to this, as he has a wiki page that I'm not finding that discusses some partial solutions to this. But in brief, this is an area that SCons doesn't handle well. The real problem is that there is no supported method to add new elements to the graph during the build (dependencies, yes, in a limited manner, but not new nodes and executors). This is a lack that we will have to address eventually, but it's really tricky, and we have so much on our plate as it is that a major new element like this is unlikely for now.

Hope this helps,
-- Greg Noel, retired UNIX guru

------------------------------------------------------
http://scons.tigris.org/ds/viewMessage.do?dsForumId=1272&dsMessageId=1026729
To unsubscribe, send an e-mail to [ user ... @scons.tigris.org ].
------------------------------------------------------
http://scons.tigris.org/ds/viewMessage.do?dsForumId=1272&dsMessageId=1026918
To unsubscribe, send an e-mail to [ user ... @scons.tigris.org ].

Announcing the NEW LivePerson Homepage!

LivePerson Logo

Greetings from the LivePerson Team!

We're writing to tell you about some great new improvements to our homepage. Yes, it was time for a makeover, and we're excited to unveil the results! You'll find better integration of the many services and features that LivePerson has to offer, complete with a fresh new look and improved navigation.

As a LivePerson Affiliate, here's what you'll need to know:

When's the launch? We're going live with our new look today!

What is changing?

  1. We will now be sharing our homepage with the B2B side of our business.
  2. You can find the link to the MyAp login on the Expert HomePage, which is linked from the right-hand side of the LivePerson homepage. However, we suggest bookmarking the MyAp login for easy access: http://myap.liveperson.com/
  3. Returning clients will be cookied so that they are automatically informed about this change and directed to the experts they are looking for.
  4. The Affiliate program will remain the same.

Thanks for helping us celebrate our new look!

Sincerely,

The LivePerson Team

30,000 experts, live, at LivePerson.com
Copyright © 2010 liveperson.com. All Rights Reserved.

We sent this e-mail to duncanjax@gmail.com because your communication settings indicate that you want to receive Account Notifications. Click here to unsubscribe.

Visit our Terms & Conditions, Privacy Policy or contact us if you have any questions.

To ensure you continue to receive our newsletters and special offers, be sure to add liveperson@advice.livepersonmail.com to your contact list or address book. Click here to learn how. Thanks!

 



Tuesday, October 12, 2010

cc -- October 21 California Carbon Capture and Storage Review Panel

The fourth meeting of the California Carbon Capture and Storage
Review Panel will be held on Thursday, October 21, 2010. The
California Energy Commission, the California Public Utilities
Commission, and the California Air Resources Board have formed
this panel to review carbon capture and storage (CCS) policy and
develop recommendations that could help guide legislation and
regulations regarding CCS in California. CCS has been identified
as a potential strategy for reducing greenhouse gas emissions
from major industrial sites.
Other state agencies interested and involved in the issue are the
California Department of Conservation and the California State
Water Resources Control Board.
THURSDAY, OCTOBER 21, 2010
8:30 a.m. – 5:30 p.m.
CALIFORNIA ENERGY COMMISSION
1516 Ninth Street
First Floor, Hearing Room A
Sacramento, California
(Wheelchair Accessible)
Public parking is available in the state-owned garage on 10th
Street between O and P Streets (entrance on 10th), in metered
spaces on area streets, and in the public parking garages on L
Street between 10th and 11th Streets and on P Street between 11th
and 12th Streets.
Remote Attendance and Availability of Documents
Internet Webcast - Presentations and audio from the meeting will
be broadcast via our WebEx web meeting service. For details on
how to participate via WebEx, please see the "Remote Attendance"
section toward the end of this notice.

Documents and presentations for this meeting will be available
on-line at
www.climatechange.ca.gov/carbon_capture_review_panel/meetings/index.html


Purpose
The goals of the California CCS Review Panel are to:

1. Identify, discuss, and frame specific policies addressing the
role of CCS in meeting the state's energy needs and greenhouse
gas reduction goals;

2. Review CCS policy frameworks used elsewhere, and identify
gaps, alternatives, and applicability in California; and

3. Develop specific committee recommendations on CCS.

The fourth meeting of the panel will focus on formulating
regulatory, legislative and policy recommendations on CCS for
California. As part of this process the panel will consider the
presentations and comments given in the first three panel
meetings on the various regulatory, statutory, and policy issues
confronting CCS in California from the perspective of different
experts in relevant areas, important stakeholders, and members of
the general public. The panel will also utilize staff white
papers that have been requested by the panel, currently posted
on-line at
www.climatechange.ca.gov/carbon_capture_review_panel/meetings/2010-08-18/white_papers
These papers cover a variety of topics of importance to CCS
including questions on permitting and agency lead for CCS;
primacy (state versus federal); long-term stewardship and
liability; enhanced oil recovery-related issues; pore space
issues; pipeline issues; saline storage; sequestration history
and risk; monitoring, measurement, and verification procedures
and protocols; the connection between CCS and Assembly Bill 32,
the California Global Warming Solutions Act of 2006; and public
outreach issues connected to CCS. The papers given to the panel
have received review and input from interested state agencies.
In addition to panel deliberations a period of time will be set
aside for open public comments to the panel. The panel
deliberations and information gathered through the public
meetings will be the basis upon which a final report will be
developed by the panel that identifies the major regulatory and
legal barriers to CCS in the state, and gives specific
recommendations regarding methods to address them and the policy
rationales for the recommendations.
Background
CCS refers to the capture, or removal, of CO2 at large industrial
sources and its subsequent compression, transport, and injection
into the subsurface for long-term or permanent storage. CCS is
one option in a portfolio of mitigation tools to reduce
greenhouse gas emissions. Energy efficiency and renewable energy
will remain cornerstones of California's efforts to control
greenhouse gases; however, CCS could also play a role in helping
California reach its greenhouse gas reduction goals if statutory
and regulatory ambiguities are addressed and a consistent policy
framework is established. Such a framework should clearly
establish the authorities and roles of various state agencies,
facilitate and streamline permitting processes, support the
development of favorable business cases for adoption of the
technology at a commercial scale, and serve the public's interest
in assuring climate change mitigation goals are met while
protecting the environment and human health and safety.
Panel Members
The following experts comprise the California CCS Review Panel:
Carl Bauer, Retired Director of the National Energy Technology
Laboratory and Chairman CCS Review Panel
Sally Benson, Director Global Climate & Energy Program (GCEP),
Stanford University
Kipp Coddington, Partner, Mowrey Meezan Coddington Cloud LLP
(M2C2)
John Fielder, President, Southern California Edison
John King, Chairman, North American Carbon Capture & Storage
Association and Environment Implementation Manager, Royal Dutch
Shell
Kevin Murray, Managing Partner, The Murray Group
George Peridas, Scientist, Climate Center, Natural Resources
Defense Council
Catherine Reheis-Boyd, President, Western States Petroleum
Association
Edward Rubin, Professor of Engineering & Public Policy, Carnegie
Mellon University
Dan Skopec, Chair, California Carbon Capture and Storage
Coalition
Panel members were chosen because of their strong interest and
record of accomplishment in developing energy and environmental
public policy.
Public Comments
Written comments on the workshop topics must be submitted by 5:00
p.m. on
October 28, 2010. Please indicate California Carbon Capture and
Storage Review Panel Meeting in the subject line or first
paragraph of your comments. Address comments to Carl Bauer,
Chairman, CCS Review Panel, in care of John Reed. Please hand
deliver or mail an original copy to:

California Energy Commission
Energy Research & Development Division
Public Interest Energy Research Program
1516 Ninth Street, MS 47
Sacramento, CA 95814-5512

The Energy Commission encourages comments by e-mail. Those
submitting comments by electronic mail should provide them in
either Microsoft Word format or as a Portable Document (PDF) to
jreed@energy.state.ca.us. Please include your name or the name
of your organization within the name of the Word document or PDF
file.

Participants may also provide an original copy at the beginning
of the meeting. All written materials relating to this workshop
will become part of the public record in this proceeding. Time
will be set aside at the meeting for oral comments by the
public.
Public Participation
The Energy Commission's Public Adviser's Office provides the
public with assistance in participating in Energy Commission
activities. If you want information on how to participate in this
forum, please contact the Public Adviser's Office at (916)
654-4489 or toll free at (800) 822-6228, by FAX at (916)
654-4493, or by e-mail at [PublicAdviser@energy.state.ca.us]. If
you have a disability and require assistance to participate,
please contact Lou Quiroz at (916) 654-5146 at least five days in
advance.

Please direct all news media inquiries to the Media and Public
Communications Office at (916) 654-4989, or by e-mail at
[mediaoffice@energy.state.ca.us].

If you have technical or logistical questions about the meeting,
please contact John Reed at (916) 653-7963, or e-mail your
question to [jreed@energy.state.ca.us].
Remote Attendance

You can participate in this meeting through WebEx, the Energy
Commission's on-line meeting service. Presentations will appear
on your computer screen, and you listen to the audio via your
telephone. Please be aware that the meeting's WebEx audio and
on-screen activity may be recorded.

Computer Log-on with Telephone Audio:
1. Please go to https://energy.webex.com and enter the unique
meeting number: 920 348 075

2. When prompted, enter your name and other information as
directed and the meeting password: meeting@9

3. After you log in, a prompt will ask for your phone number. If
you wish to have WebEx call you back, enter your phone number.
This will add your name to the WebEx log so that we know who is
connected and have a record of your participation by WebEx.

If you do not wish to do that, click cancel, and go to step 4.
Or, if your company uses an older switchboard-type of phone
system where your line is an extension, click cancel and go to
step 4.

4. If you didn't want WebEx to call you back, then call
1-866-469-3239 (toll-free in the U.S. and Canada). When prompted,
enter the meeting number above and your unique Attendee ID
number, which is listed in the top left area of your screen after
you login via computer. International callers can dial in using
the "Show all global call-in numbers" link (also in the top left
area).

Telephone Only (No Computer Access):
1. Call 1-866-469-3239 (toll-free in the U.S. and Canada) and
when prompted enter the unique meeting number above.
International callers can select their number from
https://energy.webex.com/energy/globalcallin.php

If you have difficulty joining the meeting, please call the WebEx
Technical Support number at 1-866-229-3239.


======================================================================
You are subscribed to the cc mailing list. To UNSUBSCRIBE:
Please go to http://www.arb.ca.gov/listserv/listserv.php and enter
your email address and click on the button "Display Email Lists."
To unsubscribe, please click inside the appropriate box to uncheck it
and go to the bottom of the screen to submit your request. You will
receive an automatic email message confirming that you have
successfully unsubscribed. Also, please read our listserve disclaimer
at http://www.arb.ca.gov/listserv/disclaim.htm .

The energy challenge facing California is real. Every Californian
needs to take immediate action to reduce energy consumption. For
a list of simple ways you can reduce demand and cut your energy
costs, see our website at www.arb.ca.gov.
======================================================================

Monday, October 11, 2010

Assign codes for classification

Become part of a team





=======================================================================================================================

My MacBook Pro appears to be up to date with regard to firmware, but do you happen to know which firmware upgrades were involved? The problem is very perplexing, and it is really limiting my ability to do the things I need to do.

Knox

David Mohr wrote:

> On Mon, Jun 29, 2009 at 6:02 PM, Knox Long < lo ... @stsci.edu > wrote:
>> I am seeing identical problems to those described by David Mohr using the NX client trying to connect to the free version of NX. The problem, a connection timeout, appeared as soon as I upgraded to OS X 10.5.7 on my Intel Mac. Moreover, I saw essentially the same connection timeout problem as soon as I upgraded to a VNCServer, with no trace of an attempted log-in on the server side. ssh and ssh tunneling all seem to work fine. Has anyone understood what the problem is? Thanks.
>
> I can just tell you that on one of our machines the issues magically disappeared after a firmware update (it's a laptop), while on another one they remained. We still have no clue what's going on.
> ~David

________________________________________________________________
Were you helped on this list with your FreeNX problem? Then please write up the solution in the FreeNX Wiki/FAQ:
http://openfacts2.berlios.de/wikien/index.php/BerliosProject:FreeNX_-_FAQ
Don't forget to check the NX Knowledge Base:
http://www.nomachine.com/kb/
________________________________________________________________
FreeNX-kNX mailing list --- Free ... @kde.org
https://mail.kde.org/mailman/listinfo/freenx-knx
________________________________________________________________

Sunday, October 10, 2010

Search pics and profiles for someone compatible

Start searching profiles today.





=======================================================================================================================

It turns out that I can no longer simply copy repeating nodes from outbound to inbound and expect "the right thing" to happen. When I put in an explicit loop I get the correct structure on the way out. Can't say I am happy about it, but I can live with it.

Regards
Michael

ARMC A RED MEDICAL CENTRE
BIGH A BIG HOSPITAL
D204 Blue Mountains Hospital
D210 Nepean Hospital
D214 Springwood Hospital
D230 Tresillian
. . .

Michael Czapski, Principal Field Technologist, ANZ APS, SOA/BI/Java CAPS wrote:

After a while of working with the old Java CAPS 6 Repository technologies, here I am back to GlassFish ESB v2.1. Something easy, to start with. I have an XML Schema, FacList/src/FacList.xsd, and a WSDL that uses it, FacList/src/FacListSvc.wsdl. The response structure looks like:

The BPEL populates the structure from the database. I expect a response instance document to look like (repeating FacList with FacCode and FacDescription nested inside it):

What I am getting instead, whether I test the project using SoapUI or JUnit from the CA, is:

ARMC BIGH D204 D210 D214 D230 D754 ICPMR NSMC STC TBIGH A RED MEDICAL CENTRE A BIG HOSPITAL Blue Mountains Hospital Nepean Hospital Springwood Hospital Tresillian Governor Phillip ICPMR NORTH SYDNEY MEDICAL CENTRE SYDNEY TECHNICAL HOSPITAL THE BIG HOSPITAL

Clearly, wrong. This is my first test of v2.1. It annoys me greatly that the simplest thing is not behaving as I expect. Please advise.

Michael

--
< http://www.sun.com/books/catalog/java_caps.xml >
Podcast 1 < http://mediacast.sun.com/users/Michael.Czapski-Sun/media/JavaCAPS_Czapski_Marry_P1of2Java/details >
Podcast 2 < http://mediacast.sun.com/users/Michael.Czapski-Sun/media/JavaCAPS_Czapski_Marry_P2of2Java/details >
*Michael Czapski, BSc Computing, MSc eBus.Tech.*
Principal Field Technologist, Software SOA/BI/Java CAPS
*Sun Microsystems*
33 Berry Street, North Sydney NSW 2060 Australia
Phone +61 2 9466 9427  Email Mich ... @Sun.Com
Blog: http://blogs.sun.com/javacapsfieldtech/
LinkedIn: MichaelCzapski < http://www.linkedin.com/in/michaelczapski >
Skype: michaelczapski
Screencasts and Document Archives: http://mediacast.sun.com/users/Michael.Czapski-Sun

---------------------------------------------------------------------
To unsubscribe, e-mail: user ... @open-esb.dev.java.net
For additional commands, e-mail: user ... @open-esb.dev.java.net

On Wed, 2008-09-24 at 22:58 -0700, wond ... @javadesktop.org wrote:

Ah! Heads up, the catalog.json file within the WonderlandWorldBuilder.war file still had 3X3 so I edited it. Then I compressed it all back together again, and still no soap. It remained 3X3.
> If you manually edited the catalog.json in WonderlandWorldBuilder.war and re-ran jetty, then I suspect you aren't using that .war. How do you start jetty?

I figured the heck with it, and then used alternating red/blue carpet to fill up the room. I added desks, chairs, plants and stuff after. Saved it, and now that room is just empty when I enter it via the Wonderland client.

> Most likely, you are pointing the World Builder to a different WFS than the Wonderland server. How do you start jetty? What is the value of wonderland.wfs.root in my.build.properties?

Shucks, I forgot to run 'ant' in wonderland-modules, but that didn't fix the youth-carpet. But I do have my red/blue tiled carpet and furniture intact when I run the client. I'll see what they do with the nightly builds. Ric

In /usr/share/jetty6 I run "java -Djava.endorsed.dirs=./endorsed -jar start.jar". That works fine. :) Ric

You up late, too? Ric

--
----------------------------------------------------
My father, Victor Moore (Vic) used to say: "There are two Great Sins in the world... ..the Sin of Ignorance, and the Sin of Stupidity. Only the former may be overcome." R.I.P. Dad.
Linux user# 44256 Sign up at: http://counter.li.org/
https://nuoar.dev.java.net/
Verizon Cell # 434-774-4987

Wednesday, October 6, 2010

Lead an exciting life

Become a crime solver





======================================================================================================================

Setting this as an integer seems to have resolved it. You'll need to delete the existing winSystemUptime.jrb file so that it can be recreated with integer semantics.

For what it's worth, I was collecting this as a counter on my previous installation with no trouble (again, 1.7.0 snapshot from 2 months back) in accordance with http://thread.gmane.org/gmane.network.opennms.general/25688/focus=25698 .

That post shows it being collected as a gauge (actually a "guage", but who's counting spelling?). Are you sure you had it as a counter?

Hmm, I intended to link to your message two below it:

--
On Sep 8, 2008, at 4:49 PM, ( private ) HKS wrote:
Whoops, guage should read "gauge"

Actually it should read "counter", because despite the MIB definition erroneously declaring it as a Gauge32, it's a monotonically increasing value. The various INFORMANT-* MIBs tend to be full of this particular error. Collecting it as a counter will give the expected behavior in openNMS.
-jeff
--

After that, I set it to Counter and it worked as expected. When you mentioned that it only stores the delta, that made plenty of sense. I understand why this should be an integer, and hopefully I won't bump into this again...

-HKS

-------------------------------------------------------------------------
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge. Build the coolest Linux based applications with Moblin SDK & win great prizes. Grand prize is a trip for two to an Open Source event anywhere in the world. http://moblin-contest.org/redirect.php?banner_id=100&url=/
_______________________________________________
Please read the OpenNMS Mailing List FAQ: http://www.opennms.org/index.php/Mailing_List_FAQ
opennms-discuss mailing list
To *unsubscribe* or change your subscription options, see the bottom of this page: https://lists.sourceforge.net/lists/listinfo/opennms-discuss
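For readers wanting to apply the same fix, the collection definition ends up looking something like the snippet below in datacollection-config.xml. This is only a sketch: the group name is arbitrary and the OID shown is a placeholder rather than the real INFORMANT winSystemUptime OID, so substitute the OID from your MIB (and reference the group from the appropriate systemDef). The two details that matter, per the thread, are type="counter" and deleting the old winSystemUptime.jrb so it is recreated with the new semantics.

    <!-- Sketch only: replace the placeholder OID with the winSystemUptime OID from your INFORMANT MIB -->
    <group name="informant-uptime" ifType="ignore">
      <mibObj oid=".1.3.6.1.4.1.9600.1.1" instance="0" alias="winSystemUptime" type="counter"/>
    </group>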