MuseSearch™ Widget Demo #4 – The widget with Bootstrap.js

A new MuseSearch™ widget built on the responsive Bootstrap CSS framework and jQuery is available. The widget works with Muse version 2.7.0.0 and with MuseSearch™ Application version 3.9.

In the example below the widget is included in an iFrame element because the EduLib website uses the jQuery UI library, which may conflict with Bootstrap.

A simple Bootstrap page that includes the new MSWidget version is available at: https://demo.museglobal.ro/muse/MSWidget/MSWidgetBS.html.

Install files

Installing the MuseSearch™ widget is actually pretty simple. You just need to add the following code to the <head> of your web page:

<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.0/css/bootstrap.min.css">
<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css">
<link rel="stylesheet" href="//demo.museglobal.ro/muse/MSWidget/MSWidgetBS.css">

At the bottom of the page, just before the closing </body> tag, include the following scripts:

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.0/js/bootstrap.min.js"></script>
<script src="//demo.museglobal.ro/muse/MSWidget/MSWidgetBS.js"></script>

This will make sure all the required files are loaded properly.
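Put together, a minimal host page has the following shape. This is only a sketch combining the includes above (same CDN and demo URLs as in the post); the widget markup and init call from the next sections go in the body:

```html
<!DOCTYPE html>
<html>
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <!-- Bootstrap, Font Awesome and the widget stylesheet, as listed above -->
    <link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.0/css/bootstrap.min.css">
    <link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css">
    <link rel="stylesheet" href="//demo.museglobal.ro/muse/MSWidget/MSWidgetBS.css">
  </head>
  <body>
    <!-- widget placeholder markup and the mSWidget.init() call go here -->

    <!-- scripts loaded last so the page renders before they are fetched -->
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
    <script src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.0/js/bootstrap.min.js"></script>
    <script src="//demo.museglobal.ro/muse/MSWidget/MSWidgetBS.js"></script>
  </body>
</html>
```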

Add markup

The HTML markup for the MuseSearch™ widget is also very simple. You simply need to create a <div> with an id (EmbededMSWidget in this case). In this demo we added the MuseSearch™ widget to a jQuery UI "dialog" object; for that we created a <div> with the id EmbededMSWidgetPosition in which the MuseSearch™ widget is opened.

<div id="EmbededMSWidget" class="container-fluid" style="height: 600px"></div>
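For the jQuery UI dialog mentioned above, the wrapper can look like the sketch below. This is a hypothetical illustration (the dialog sizes and the iFrame source are assumptions; the demo page URL is the one from the post), and it assumes jQuery and jQuery UI are already loaded on the host page:

```html
<!-- Hypothetical dialog wrapper: the Bootstrap widget page is isolated
     in an iFrame to avoid jQuery UI / Bootstrap conflicts -->
<div id="EmbededMSWidgetPosition" title="MuseSearch&trade;" style="display: none">
  <iframe src="https://demo.museglobal.ro/muse/MSWidget/MSWidgetBS.html"
          style="width: 100%; height: 100%; border: 0"></iframe>
</div>
<script>
  // Create the dialog closed; open it later with
  // $("#EmbededMSWidgetPosition").dialog("open")
  $("#EmbededMSWidgetPosition").dialog({
    autoOpen: false,
    width: 800,
    height: 650
  });
</script>
```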

Hook up the widget

Because access to the Muse Web Bridge is CORS protected, if you want to test this widget against our Muse installation at http://demo.museglobal.ro/muse, let us know your container web page address (protocol://FQDN:port) so that we can add it to our CORS ACLs. If you want to test it against your local Muse 2.7.0.0 QA system, let us know so we can provide you with an application patch. Finally, hook up the widget by adding the following code after the EmbededMSWidget div created above.

<script>
mSWidget.init({
    URL: "http://demo.museglobal.ro/muse/servlet/MusePeer", // Server URL
    USER_ID: "MSWidget", // User ID
    USER_PWD: "lzOoUwTu7E/UvHCw9KpOsmoEbl4=", // User password
    USER_PWD_ENCRIPTION: "SHA1", // User password encryption. Values: "" | "SHA1"

    WIDGET_PLACEDOLDER: "#EmbededMSWidget", // The placeholder where the widget will be displayed. Values: body | #ID (the ID of an HTML element such as DIV, SPAN, TD, P)

    RESULTS_PER_PAGE: 10, // Results per page
    RESULTS_PER_SOURCE: 10, // Results per source
    USE_PAGINATION: "true", // Use pagination. Values: "false" | "true". "false" means an infinite scroll is used for the result set list; "true" means the result set list is displayed in multiple pages.
    SHOW_PROGRESS: "true" // Show search progress in another tab.
})
</script>

Besides the dedicated SharePoint MuseSearch™ WebPart, available since Muse 2.3.0.0, in this post we describe two other simple ways to integrate the MuseSearch™ Application into a SharePoint site, available starting with Muse version 2.7.0.0 and MuseSearch™ Application version 3.9.

1. Insert a MuseSearch™ Application into a SharePoint 2013 Site Using the “Page Viewer Web Part”

Based on the tutorial http://community.bamboosolutions.com/blogs/sharepoint-2013/archive/2013/07/30/how-to-insert-a-web-page-onto-a-site-using-the-page-viewer-web-part-for-sharepoint-2013.aspx the steps to include the MuseSearch™ Application are:

1. Edit an existing SharePoint site page or Add a new page and Edit it;
2. On the “Edit Page”, select the “Insert Web Part” tab. From there, select “Media and Content” from “Categories”, and then “Page Viewer” from the “Parts” list;
3. Go to “Edit Web Part” to open the “Tool Pane” and input the Muse address (e.g. http://demo.museglobal.ro/muse/) you wish to display on your site into the “textbox” on the “Page Viewer”;
4. Click "Apply" and "OK" to finish inserting the MuseSearch™ Application login page.

An example of a SharePoint page with MuseSearch™ Application interface included is depicted below.

2. Insert a MuseSearch™ Widget into a SharePoint 2013 Site Using the “Script Editor Web Part”

Based on the tutorial http://blog.cloudshare.com/2012/10/29/how-to-insert-custom-javascript-code-in-sharepoint-2013-pages-part-i the steps to include the MuseSearch™ Widget are:

1. Edit an existing SharePoint site page or Add a new page and Edit it;
2. On the “Edit Page”, select the “Insert Web Part” tab. From there, select “Media and Content” from “Categories”, and then “Script Editor Web Part”;
3. Click on the “EDIT SNIPPET” link and insert the HTML and JavaScript code for MuseSearch™ Widget just as we have in the https://www.edulib.com/blog/musesearch-widget-demo-2-using-pagination-for-the-result-set-list/ page:

<html>
<head>
<link rel="stylesheet" href="http://demo.museglobal.ro/muse/logon/MuseSearch/skins/redmond/jquery-ui.css"/>
<script src="http://demo.museglobal.ro/muse/logon/MuseSearch/javascripts/jquery.js"></script>
<script src="http://demo.museglobal.ro/muse/logon/MuseSearch/javascripts/jquery-ui.js"></script>
<link rel="stylesheet" href="http://demo.museglobal.ro/muse/MSWidget/MSWidget.css"/>
<script src="http://demo.museglobal.ro/muse/MSWidget/MSWidget.js"></script>
</head>
<body>
<div id="EmbededMSWidget" style="width: 600px; height: 400px;"></div>
<script>
mSWidget.init({
    URL: "http://demo.museglobal.ro/muse/servlet/MusePeer", // Server URL
    USER_ID: "MSWidget", // User ID
    USER_PWD: "lzOoUwTu7E/UvHCw9KpOsmoEbl4=", // User password
    USER_PWD_ENCRIPTION: "SHA1", // User password encryption. Values: "" | "SHA1"
    WIDGET_PLACEDOLDER: "#EmbededMSWidget", // The placeholder where the widget will be displayed. Values: body | #ID (the ID of an HTML element such as DIV, SPAN, TD, P)
    RESULTS_PER_PAGE: 10, // Results per page
    RESULTS_PER_SOURCE: 10, // Results per source
    USE_PAGINATION: "true" // Use pagination. Values: "false" | "true". "false" means an infinite scroll is used for the result set list; "true" means the result set list is displayed in multiple pages.
})
</script>
</body>
</html>


An example of a SharePoint page with MuseSearch™ Widget included is depicted below.

We have compared the features of CERTivity® KeyStores Manager with those of the most relevant similar products. The features are organized in categories, each category initially showing all its features.

Although this comparison was made by EduLib, the creator of CERTivity, we tried to be as objective and fair as possible. If you have any comments or suggestions, do not hesitate to contact us.

You can also download this comparison in PDF format.

Feature Name: CERTivity 2.0 | Keystore Explorer 5.0.1 | Portecle 1.7 | KeyTool IUI 2.4.1
Released Date: 2014-01-23 | 2013-11-24 | 2011-01-23 | 2008-10-18
Maintained
Platforms: On any Platform That Can Run Java | On any Platform That Can Run Java | On any Platform That Can Run Java | On any Platform That Can Run Java
Has bundled JRE
Has installer
KeyStore
Management
Supported Java KeyStore Types: JKS, JCEKS, PKCS12, BKS, BKS-V1, UBER | JKS, JCEKS, PKCS12, BKS, UBER | JKS, PKCS#12, JCEKS, JKS (case sensitive), BKS, UBER, GKR (but option is inactive) | JKS, JCEKS, PKCS#12, BKS, UBER
Create a New KeyStore
Open an Existent KeyStore
Open Windows Root CA KeyStore
Open Windows User KeyStore
Discover JREs CA TrustStores
Open JREs CA TrustStores: Only main JRE | Only main JRE | Only main JRE
Save a KeyStoreIt's done automatically after some operations
Defining a Default KeyStore(planned for future releases)
Convert KeyStore Type
Change KeyStore Password
Delete Entry
Change Entry Password
Change Entry Alias
Cut/Copy - Paste Single KeyStore Entry: Allows only cloning a certificate into the same KeyStore | Allows only copying a certificate into the same KeyStore
Cut/Copy - Paste Multiple Entries
TrustStore
Management
Set/Remove CA Certs TrustStore at runtime without
restarting the application
Set Multiple TrustStores for Trust Path Validation
Availability to use JRE CA Certs TrustStores (from
discovered JREs) for Trust Path Validation
Availability to use Windows KeyStores (for Microsoft
Windows Systems) for Trust Path Validation
(only Windows Root CA)
Availability to use Custom KeyStores for Trust Path
Validation
(only if the CA Certs is changed to a custom one)(only if the CA Certs is changed to a custom one)
Availability to use current opened (and selected) KeyStore
for Trust Path Validation
Display Trust Status for Certificate Entries in
KeyStores
Display Trust Status for Opened Certificates
Customizable Trust Path Validation Options Without
Restarting the Application
Available Trust Path Validation OptionsInhibit any policy, Explicit policy required, Inhibit
policy mapping, Use revocation checking, Use policy qualifier
processing, Use path length constraint (with customizable path
length size), Use custom validation date, Provider selection
(default provider or Bouncy Castle provider)
Interface
Usability
MDI Interface for KeyStores
MDI Interface for Certificates/CRL/CSR
KeyStore Representation: Tree List (entries are displayed as a list of expandable nodes; available subitems for KeyPairs: Private/Public Keys, Certificate Chains, Certificates, Extensions; available subitems for Certificates: Public Key, Extensions) | Simple List (entries are not expandable) | Simple List (entries are not expandable) | Simple List (entries are not expandable)
Available Entries Direct Information: Algorithm and Size, Expiry Date, Last Modified, Validity Status, Trust Status | Algorithm and Size, Expiry Date, Last Modified, Validity Status | Alias Name, Last Modified | For Key Pairs and Certificates: Alias, Entry Type, Valid Date, Self-Signed, Trusted C.A., Key Size, Cert. Type, Cert. Sig. Algorithm, Modified Date; For Secret Keys: Alias, Entry, Modified Date
Mark Locked Keys
Mark Expired Key Pairs/Certificates
Mark Certificate Trust Status
Mark Key Pairs with Key sizes smaller than a configurable
value
Undo/Redo for KeyStore Operations and Imports
Prompting to re-enter password in case of wrong password
for unlocking Private/Secret Keys
Prompting to re-enter password in case of wrong password
when converting a KeyStore to a different type (operation does not
fail)
Informing when a Key Store which contains Secret Keys can
not be converted to a Key Store type that does not support Secret
Keys before entering all the passwords
Converts with removing secret keys (it gives a slight
warning first)
Prompting for passwords when converting from a KeyStore
type which does not support passwords to a KeyStore type which
supports entry passwords
Displaying Entry Information Mode: Bottom Panel (and few details in the KeyStore View) | New Dialog (and few details in the KeyStore View) | New Dialog (and few details in the KeyStore View) | New Dialog (text based content)
Allows rearranging Key Store/Certificate tabs
Configurable Arrangement and Positioning of Tabs
Configurable Tabs Position by Dragging
Window Configuration OptionsMaximize, Float, Float Group, Minimize, Minimize Group,
Dock, Dock Group, New Document Tab Group, Collapse Document Tab
Group
Multiple KeyStore Entries Selection
Multiple KeyStore Entries Copy - Paste between
KeyStores
Copy a Certificate From a Certificate Chain and Paste It
Into Another KeyStore
Configurable Key Shortcuts (Keymap)
Displaying Providers List(planned for future releases)
"Close All Documents" Option
Opened Tabs Manager
Opened Tabs Manager OptionsSwitch to Document, Close Document(s)
Easy Tab Selector Drop list
Available Actions/Options Tree Like Structure(planned for future releases)
Quick Search (with text box)
Change Look And Feel(planned for future releases)
Password Strength Indicator(planned for future releases)
Show tips at startup(planned for future releases)
Key Pair
Operations
Generate Key Pair (RSA/DSA)
Regenerate Key Pair
Sign With Selected KeyPair at Generation Time
Key Pair Generation - Signature Algorithms (for DSA Keys): SHA1 With DSA, SHA224 With DSA, SHA 256 With DSA, SHA 384 With DSA, SHA 512 With DSA | SHA.1 with DSA, SHA-224 with DSA, SHA-256 With DSA, SHA-384 with DSA, SHA-512 with DSA | SHA1withDSA, SHA224withDSA, SHA256withDSA | SHA1withDSA
Key Pair Generation - Signature Algorithms (for RSA Keys): MD2 with RSA, MD5 with RSA, SHA1 with RSA, SHA1 With RSA and MGF1, SHA224 With RSA, SHA224 With RSA and MGF1, SHA256 With RSA, SHA256 With RSA and MGF1, SHA384 With RSA, SHA384 With RSA and MGF1, SHA512 With RSA, SHA512 With RSA and MGF1, RIPEMD128 With RSA, RIPEMD160 With RSA, RIPEMD256 With RSA | MD2 with RSA, MD5 with RSA, RIPEMD-128 with RSA, RIPEMD-160 with RSA, RIPEMD-256 with RSA, SHA.1 with RSA, SHA-224 with RSA, SHA-256 With RSA, SHA-384 with RSA, SHA-512 with RSA | MD2withRSA, MD5withRSA, SHA1withRSA, SHA224withRSA, SHA256withRSA, SHA384withRSA, SHA512withRSA, RIPEMD128withRSA, RIPEMD160withRSA, RIPEMD256withRSA | MD5withRSA, SHA256withRSA, SHA384withRSA, SHA512withRSA, RIPEMD128withRSA, RIPEMD160withRSA, RIPEMD256withRSA
Generate Key Pair (EC)
Key Pair Generation - EC Algorithms: EC(ECDSA), ECGOST3410 | EC(ECDSA)
Key Pair Generation - EC Parameters Specification (for
ECDSA Algorithm)
c2pnb272w1, c2tnb191v3, c2pnb208w1, c2tnb191v2, c2tnb191v1,
c2tnb359v1, prime192v1, prime192v2, prime192v3, c2tnb239v3,
c2pnb163v3, c2tnb239v2, c2pnb163v2, c2tnb239v1, c2pnb163v1,
c2pnb176w1, prime256v1, c2pnb304w1, c2pnb368w1, c2tnb431r1,
prime239v3, prime239v2, prime239v1, sect233r1, secp112r2,
secp112r1, secp256k1, sect113r2, secp521r1, sect113r1, sect409r1,
secp192r1, sect193r2, sect131r2, sect193r1, sect131r1, secp160k1,
sect571r1, sect283k1, secp384r1, sect163k1, secp256r1, secp128r2,
secp128r1, secp224k1, sect233k1, secp160r2, secp160r1, sect409k1,
sect283r1, sect163r2, sect163r1, secp192k1, secp224r1, sect239k1,
sect571k1, B-163, P-521, P-256, B-233, P-224, B-409, P-384, B-283,
B-571, P-192, brainpoolp512r1, brainpoolp384t1, brainpoolp256r1,
brainpoolp192r1, brainpoolp512t1, brainpoolp256t1,
brainpoolp224r1, brainpoolp320r1, brainpoolp192t1,
brainpoolp160r1, brainpoolp224t1, brainpoolp384r1,
brainpoolp320t1, brainpoolp160t1
prime192v1, prime239v1, prime256v1
Key Pair Generation - EC Parameters Specification (for
ECGOST3410 Algorithm)
GostR3410-2001-CryptoPro-A, GostR3410-2001-CryptoPro-XchB,
GostR3410-2001-CryptoPro-XchA, GostR3410-2001-CryptoPro-C,
GostR3410-2001-CryptoPro-B
Key Pair Generation - Signature Algorithms (for ECDSA EC
Keys)
SHA1withECDSA, SHA224withECDSA, SHA256withECDSA,
SHA384withECDSA, SHA512withECDSA
SHA1withECDSA, SHA224withECDSA, SHA256withECDSA,
SHA384withECDSA, SHA512withECDSA
Key Pair Generation - Signature Algorithms (for ECGOST3410
EC Keys)
GOST3411 with ECGOST3410
Key Pair Generation CERT X.500 DN FieldsCommon Name (CN), Organization Unit (OU), Organization (O),
Locality (L), State (ST), Country (C), Email (E)
Common Name (CN), Organization Unit (OU), Organization (O),
Locality (L), State (ST), Country (C), Email (E)
Common Name (CN), Organization Unit (OU), Organization (O),
Locality (L), State (ST), Country (C), Email (E)
Common Name (CN), Organization Unit (OU), Organization (O),
Locality (L), State (ST), Country (C), Email (E)
Standardized DN Country Codes (2 letter code)
support
Key Pair Generation CERT X.500 DN Fields (extended): (planned for future releases) | Title, Device serial number name, Business category, DN qualifier, Pseudonym, 1-letter gender, Name at birth, Date of birth, Place of birth, Street, Postal code, Postal address, 2-letter country of residence, 2-letter country of citizenship
Key Pair Generation CERT X.520 Name: (planned for future releases) | Surname, Given name, Initials, Generation, Unique Identifier
Import Key Pair into KeyStore (from PKCS#12 Files)
Import Key Pair into KeyStore (from PKCS#8 private key and
Certificate)
Import Key Pair into KeyStore from OpenSSL private key and
certificate)
Import Key Pair into KeyStore (from PVK private key and
Certificate)
(planned for future releases)
Import Key Pair into KeyStore (from PEM private key and
Certificate Chain)
Import Key Pair into KeyStore (from other KeyStore)(planned for future releases)
Import Key Pair into KeyStore from a Private Key and More
Certificate Files (which can create a chain)
Export Key Pair (PKCS#12)
Export Key Pair (PEM Encoded)(planned for future releases)
Extend Validity of Self-Signed KeyPairs
Enter New Serial Number When Extending Validity of
Self-Signed Certificates
Certificates
Operations
Open a standalone certificate/Examine standalone
certificate
Open a Certificate Chain/Examine Certificate Chain
View Certificate Details
View Certificate Details From Signature(only Certificate Type and Subject DN, for each signed
entry, for JAR files)
Available Certificate DetailsFormat, Version, Serial Number, Valid From/To, Public Key,
Extensions, Signature Algorithm, Multiple Fingerprints,
Subject/Issuer Information (CN, OU, O, L, ST, C, E), PEM,
ASN.1
Version, Serial Number, Valid From/Until, Public Key,
Signature Algorithm, Multiple Fingerprints, Subject/Issuer
Information (CN, OU, O, L, ST, C, E), Extensions, PEM,
ASN.1
Chain position and total number of certificates in the
chain, Version, Serial Number, Valid From/Until, Public Key,
Signature Algorithm, Fingerprints, Subject/Issuer DN String,
Extensions, PEM Encoding
Owner (Subject DN String), Issuer (Issuer DN String),
Version, Serial Number, Valid From/Until, Signature Algorithm,
Fingerprints, Extensions
Available Fingerprints: MD2, MD4, MD5, SHA1, RIPEMD-128, RIPEMD-160, RIPEMD-256, SHA-224, SHA-256, SHA-384, SHA-512 | MD2, MD4, MD5, RIPEMD-128, RIPEMD-160, RIPEMD-256, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512 | SHA1, MD5 | MD5, SHA.1
View PEM Representation for a Certificate
View ASN.1 for a Certificate
Import Certificate from files into KeyStore
Import Root CA Certificate (directly into the Root CA certs
KeyStore)
(planned for future releases)
Import Certificate into a KeyStore directly from
Certificate Details Dialog
(planned for future releases)
Import Certificate into KeyStore with trust path
validation
(manual validation)
Import Certificate from Server into KeyStore
Import Certificate from Signature into KeyStore
Export Certificate
Export Certificate From Signature to file (JAR, APK, PDF,
XML)
Export Certificate Supported Formats: X.509, X.509 PEM Encoded, PKCS#7, PKCS#7 PEM Encoded, PKI Path | X.509, X.509 PEM Encoded, PKCS#7, PKCS#7 PEM Encoded, PKI Path, SPC | DER Encoded, PEM Encoded, PKCS#7, PkiPath | DER, PKCS#7, PEM
Export Certificate Chain(only when exporting with private key also)
Export Certificate Chain Supported Formats: PKCS#7, PKCS#7 PEM Encoded, PKI Path | PKCS#7, PKCS#7 PEM Encoded, PKI Path | PKCS#7, PkiPath | DER, PEM
Obtain the Revocation Status
Retrieve Certificate From SSL Server: TLSv1, TLS v1.1, TLS v1.2 and default algorithm | TLSv1, TLS v1.1, TLS v1.2 | TLSv1 (SSLv 3.1)
Retrieve Certificate From SSL Server (additional connection
info)
(planned for future releases)Connection Protocol, Connection Cipher Suite
Retrieve Certificate From SSL Server using HTTPS URL (not
host and port specifically)
Test Certificates on Given Protocol
View Associated CRL
Append signer certificate to key pair certificate
chains
Remove signer certificate from key pair certificate
chains
Rename Certificate
Delete Certificate
Renewal of CertificateOnly when the certificate is within a Key Pair
Certificate
Extensions
View Certificate Extensions
View ASN.1 for a Certificate Extension
Add Certificate Extensions when generating a new
KeyPair
Add Certificate Extensions to CA Replies
Save Certificate Extensions Template
Save Certificate Extensions Template as XML
Available Certificate ExtensionsAuthority Information Access, Authority Key Identifier,
Basic Constraints, Certificate Policies, CRL Distribution Points,
Extended Key Usage, Freshest CRL, Inhibit Any Policy, Issuer
Alternative Name, Key Usage, Name Constraints, Netscape Cert Type,
Private Key Usage Period, Policy Constraints, Policy Mappings,
Subject Alternative Name, Subject Information Access, Subject
Directory Attributes, Subject Key Identifier.
Authority Information Access, Authority Key Identifier,
Basic Constraints, Certificate Policies, Extended Key Usage,
Inhibit Any Policy, Issuer Alternate Name, Key Usage, Name
Constraints, Netscape Base URL, Netscape CA Policy URL, Netscape
CA Revocation CRL, Netscape Certificate Renewal URL, Netscape
Certificate Type, Netscape Comment, Netscape Revocation URL,
Netscape SSL Server Name, Policy Constraints, Policy Mappings,
Private Key Usage Period, Subject Alternative Name, Subject
Information Access, Subject Key Identifier
(many only for display, but not specified anywhere) | (many for display) For Key Pair Creation: Key Usage, Extended Key Usage
Extensions display at creation time (GUI Point of
view)
Tree - like Structure where all extensions, properties and
sub-items are visible in a single dialog
List of extensions, each one opening in a different dialog
for setting properties, and each sub-item opens also in a
different dialog
Certificate Authority
Functions
Check PKI file type
Certificate Signing made easier using “Select as CA Issuer”
and “Sign Certificate by ” actions
Certificate chain management: append and remove signer
certificate (with Copy/Paste/Delete/Undo/Redo functionality
included)
(supported only from menu without Copy/Paste)
Generate Certificate Signing Request (CSR) files
Sign Certificate Signing Request (CSR) files
Import CA Reply
Trust verification when Importing CA Reply
Trust verification when Importing CA Reply (with user
confirmation when trust is not established)
Act as a testing-purposes CA (by generating CSR files, signing CSRs and importing CA Replies)
CSR
View CSR Details/Examine CSR(only PEM display)
Available CSR DetailsFormat, Version, Public Key (with details available),
Signature Algorithm, Subject (CN, OU, O, L, ST, C, E), Challenge,
CSR Dump (PEM)
Format, Public Key (with details available), Signature
Algorithm, Subject (CN, OU, O, L, ST, C, E), Challenge, CSR Dump
(PEM, ASN.1)
Version, Subject DN String, Public Key (Algorithm and
size), Signature Algorithm, PEM
PEM
Generate CSR Files
Generate CSR Files Supported Formats: PKCS#10, SPKAC | PKCS#10, SPKAC | PKCS#10 (probably) | PKCS#10
CRL
View CRL Details/Examine CRL
View Remote CRLs
Protocols Supported for Opening Remote CRLsHTTP, HTTPS, FTP, LDAP
Available CRL DetailsType, Version, This Update, Next Update, Signature
Algorithm, Issuer (CN, OU, O, L, ST, C, E), Extensions, ASN.1,
Revoked Certificates (+Extensions)
Version, Issuer (CN, OU, O, L, ST, C, E), Effective Date,
Next Update, Signature Algorithm, Extensions, ASN.1, Revoked
Certificates (+Extensions)
Version, Issuer DN String, Effective Date, Next Update,
Signature Algorithm, Extensions, Revoked Certificates
(+Extensions)
View CRL Extensions
Next Update Exceeded Verification
CA Reply
Import CA Reply With Trust Path Validation
View CA Reply Details(Only if opened as a certificate and browse through the
chain)
(Only if opened as a certificate, and browse through the
chain)
(Only if opened as a certificate, and you can browse
through the chain)
Create CA Reply
Secret Key
Operations
Available Secret Keys Information: Algorithm, Last Modified | Algorithm, Key Size, Last Modified | Last Modified | Modified date
View Secret Key Details(planned for future releases)Algorithm, Format, Size, Value in hexa
Generate Secret Key
Secret Key Supported AlgorithmsAES, AESWrap, ARCFOUR, BlowFish, Camellia, Cast5, Cast6,
DES, DESede, DESedeWrap, GOST28147, Grainv1, Grain128, HC128,
HC256, Noekeon, RC2, RC4, RC5, RC5-64, RC6, Rijndael, Salsa20,
Seed, Serpent, Skipjack, TEA, Twofish, VMPC, VMPC-KSA3, XTEA,
HmacMD2, HmacMD4, HmacMD5, HmacRIPEMD128, HmacRIPEMD160, HmacSHA1,
HmacSHA224, HmacSHA256, HmacSHA384, HmacSHA512, HmacTIGER
AES, ARC4, Blowfish, Camellia, CAST-128, CAST-256, DES,
DESEDE, GOST 28147-89, Grain v1, Grain-128, HC-128, HC-256,
HMac-MD2, HMac-MD4, HMac-MD5, HMac-RipeMD128, HMac-RipeMD160,
HMac-SHA1, HMac-SHA224, HMac-SHA256, HMac-SHA384, HMac-SHA512,
HMac-Tiger, NOKEON, RC2, RC5, RC6, Rijndael, Salsa20, Serpent,
SEED, Skipjack, TEA, Twofish, XTEA
AES, ARCFOUR, Blowfish, DES, DESede, HmacMD5, HmacSHA1,
HmacSHA256, HmacSHA384, HmacSHA512, RC2
Provider Selection for Generation Available
Offers Supported Key Sizes for Each Algorithm
Import Secret Key From File(planned for future releases)
Export Secret Key To File(planned for future releases)
Export Secret Key To File Format(planned for future releases)DER, PEM
Private Key
Operations
View Private Key Details
Available Private Key Details (for DSA)Algorithm, Key Size, Fields (Basic Generator G, Prime
Modulus P, SubPrime Q, Private Key Value; ), ASN.1
Algorithm, Key Size, Fields (Prime Modulus P, Prime Q,
Generator G, Secret Exponent X), ASN.1
Key Size
Available Private Key Details (for RSA)Algorithm, Key Size, Fields (Modulus, Private Exponent,
Public Exponent, CRT Coefficient, Prime Exponent P, Prime Exponent
Q, Prime Modulus P, Prime Q), ASN.1
Algorithm, Key Size, Format, Encoded, Fields (Public
Exponent, Modulus, Prime P, Prime Q, Prime Exponent P, Prime
Exponent Q, CRT Coefficient, Private Exponent), ASN.1
Key Size
Available Private Key Details (for ECDSA /
ECGOST3410)
Algorithm, Key Size, Parameters Specification, Fields
(Private Value S, Cofactor, First Coefficient A, Second
Coefficient B, Field Size, Seed, Generator Affine X-Coordinate,
Generator Affine Y-Coordinate, Generator Order), ASN.1
Algorithm, Key Size (for ECDSA only), Format, Encoded,
ASN.1
Key Size (for ECDSA only)
Export Private Key(but only together with certificate file)
Export Private Key Supported Formats: PKCS#8, PKCS#8 PEM Encoded, Open SSL PEM Encoded | PKCS#8, PKCS#8 PEM Encoded, PVK, OpenSSL PEM Encoded | DER, PEM
Export Private Key Encryption Algorithms (PKCS#8)PBE_SHA1_2DES, PBE_SHA1_3DES, PBE_SHA1_RC2_40,
PBE_SHA1_RC2_128, PBE_SHA1_RC4_40, PBE_SHA1_RC4_128
PBE with SHA.1 and 2 key DESede, PBE with SHA.1 and 3 key
DESede, PBE with SHA.1 and 40 bit RC2, PBE with SHA.1 and 128 bit
RC2, PBE with SHA.1 and 40 bit RC4, PBE with SHA.1 and 128 bit
RC4
Export Private Key Encryption Algorithms (OpenSSL)AES-128-CBC, AES-128-CFB, AES-128-ECB, AES-128-OFB, BF-CBC,
BF-CFB, BF-ECB, BF-OFB, DES-CBC, DES-CFB, DES-ECB, DES-EDE-CBC,
DES-EDE-CFB, DES-EDE-ECB, DES-EDE-OFB, DES-EDE, DES-EDE3-CBC,
DES-EDE3-CFB, DES-EDE3-ECB, DES-EDE3-OFB, DES-EDE3, DES-OFB,
RC2-40-CBC, RC2-64-CBC, RC2-CBC, RC2-CFB, RC2-ECB, RC2-OFB
PBE with DES CBC, PBE with DESede CBC, PBE with 128 bit AES
CBC, PBE with 192 bit AES CBC, PBE with 256 bit AES CBC
Public Key
Operations
View Public Key Details
Available Public Key Details (for DSA Keys)Algorithm, Key Size, Fields (Basic Generator G, Prime
Modulus P, SubPrime Q, Public Key), ASN.1
Algorithm, Key Size, Format, Encoded, Fields (Prime Modulus
P, Prime Q, Generator G, Public Key Y), ASN.1
Available Public Key Details (for RSA Keys)Algorithm, Key Size, Fields (Modulus, Public Exponent),
ASN.1
Algorithm, Key Size, Format, Encoded, Fields (Public
Exponent, Modulus), ASN.1
Available Public Key Details (for ECDSA / ECGOST3410
Keys)
Algorithm, Key Size, Fields (Basic Generator G, Prime
Modulus P, SubPrime Q, Public Key), ASN.1
Algorithm, Key Size, Format, Encoded, ASN.1
Export Public Key
Export Public Key Supported Formats: Open SSL, Open SSL PEM Encoded | Open SSL, Open SSL PEM Encoded
Sign and Verify
Verify Signatures for JAR Files
Verify Signatures for APK Files
Verify Signatures for PDF Files
Verify Signatures for XML Files
Verify XML Signature - allow using external cert.
validation
Verify XML Signature - set use external cert. validation
and embedded cert. validation order
Verify XML Signature - allow selecting the external cert.
from file or from a given KeyStore entry (from KeyStore
file)
Sign JAR Files
JAR Signing - Signature Algorithms: SHA.1 with DSA, MD2 with RSA, MD5 with RSA, SHA.1 with RSA, SHA.1 with ECDSA | SHA.1 with DSA, MD2 with RSA, MD5 with RSA, SHA.1 with RSA | SHA.1 With DSA, SHA.1 With RSA
JAR Signing - Digest Algorithms: MD2, MD5, SHA.1, SHA224, SHA256, SHA384, SHA512 | MD2, MD5, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512 | SHA.1
JAR Signing - Add Full Manifest Digest Attribute
Configurable
Sign APK Files
APK Signing - Signature Algorithms: SHA.1 with DSA, MD2 with RSA, MD5 with RSA, SHA.1 with RSA | SHA.1 with DSA, MD2 with RSA, MD5 with RSA, SHA.1 with RSA
APK Signing - Digest Algorithms: MD2, MD5, SHA.1, SHA224, SHA256, SHA384, SHA512 | MD2, MD5, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
APK Signing - Add Full Manifest Digest Attribute
Configurable
Sign XML Files
XML Signing - Signature Types: Enveloped, Enveloping, Detached | Enveloped
XML Signing - Digest AlgorithmsSHA1, SHA256, SHA512
XML Signing - Canonicalization AlgorithmsInclusive, Inclusive With Comments, Exclusive, Exclusive
With Comments
XML Signing - Allow Attaching Key To Signature
XML Signing - Allow Attaching Certificate To
Signature
Sign PDF Files
PDF Signing - Signature Subfiltersadbe.pkcs7.sha1, adbe.x509.rsa_sha1,
adbe.pkcs7.detached
Sign CSR Files/Create Certificate from CSR
Prevention for Signing CSR Files by the Same Key Pair That
Created Them
CSR Signing - Signature AlgorithmsSHA.1 with DSA, SHA224 with DSA, SHA256 with DSA, SHA384
with DSA, SHA512 with DSA, MD2 with RSA, MD5 with RSA, SHA.1 with
RSA, SHA.1 with RSA and MGF1, SHA224 with RSA, SHA224 with RSA and
MGF1, SHA256 with RSA, SHA 256 with RSA and MGF1, SHA384 with RSA,
SHA 384 with RSA and MGF1, SHA512 with RSA, SHA512 with RSA and
MGF1, RIPEMD128 with RSA, RIPEMD160 with RSA, RIPEMD256 with
RSA
SHA.1 with DSA, SHA-224 with DSA, SHA-256 With DSA, SHA-384
with DSA, SHA-512 with DSA, MD2 with RSA, MD5 with RSA, RIPEMD-128
with RSA, RIPEMD-160 with RSA, RIPEMD-256 with RSA, SHA.1 with
RSA, SHA-224 with RSA, SHA-256 With RSA, SHA-384 with RSA, SHA-512
with RSA
Sign J2ME MIDlet Applications Files
Verify Detached Signature - CMS(planned for future releases)
Sign With Detached Signature - CMS(planned for future releases)
Detached Signature - CMS Formats - CMS Signature
File
(planned for future releases)P7M, P7S
Detached Signature - CMS Formats - CMS Certs-only
file
(planned for future releases)P7C
Detached Signature - CMS Formats - digest
algorithms
(planned for future releases)SHA1, SHA224, SHA256, SHA384, SHA512, MD5, RIPEMD128,
RIPEMD160, RIPEMD256
Verify Detached Signature - Other(planned for future releases)
Sign Detached Signature - Other(planned for future releases)
Detached Signature - Other Formats - Signature File(planned for future releases)DER, PKCS#7, PEM
Detached Signature - Other Formats - Certificate
File
(planned for future releases)DER, PKCS#7, PEM
Allow signing using any Key Pair irrespective of
Certificate extension
Suggest candidate KeyPairs for signing (the ones that have
the right extensions for their certificates)
(planned for future releases)
Encrypting Files
Encrypt file using Secret Key
Encrypt file using RSA trusted certificate
Encrypt file using private key
RSA Encryption Algorithms: RSA/ECB/PKCS1Padding, RSA/NONE/PKCS1Padding, RSA/NONE/OAEPWithSHA1AndMGF1Padding
Other
KeyStore Persistence between sessions
KeyStore Persistence type: Fully persist (name and password), Only KeyStore names, No persistence
Open Files Using Drag & Drop
File Types Supported For Drag & Drop: KeyStore, Certificate, CSR, CRL irrespective of the file extension | KeyStore | Only based on extension: KeyStore, Certificate, CSR, CRL
Supported KeyStore file extensions: cacerts, ks, jks, jce, p12, pfx, bks, ubr, keystore | ks, keystore, jks, jceks, bks, uber, pfx, p12 | ks, jks, jceks, p12, pfx, bks, cacerts | ubr, jks, ks, jce, bks, pfx, p12
Open Recent Files(maximum 4 files)
Remember last file directory between sessions
Remember last file directory for each specific action
(Opening a Key Store, a Certificate, etc.)
KeyStore Properties (Tree - like entries
structure)/KeyStore Report
(planned for future releases)
KeyStore Properties - Export structure in text and XML
formats)
(planned for future releases)(copy in memory)
Set Password Quality(planned for future releases)
Configure/Set Internet Proxy(planned for future releases)
View Cryptography Strength/Policy Details
Detection of Cryptography Strength Policy Limitation when
Launching the Application
(planned for future releases)
GUI Support for Upgrading Cryptography Strength
Support for Manual Upgrading Cryptography Strength in case
automatic upgrade fails
Customizable PropertiesCertificate expiry notification interval, RSA Key Pair
minimum allowed size, RSA Key Pair maximum allowed size, RSA Key
Pair default size, Autogenerated certificate serial number maximum
bit length, Undo level, Log level, Memory usage maximum threshold
level, Keystore persistence type, Recent file list maximum size,
JRE CA KeyStore list max size, Certificates Retriever connection
type, Inspected and draggable file size limit
Set CA Certificates Key Store, Minimum Password Quality,
Look And Feel, Internet Proxy, Trust Checks
Import/Export Configuration Properties
Add extension to file name on export, if the name does not
contain an extension from the selected file filter
Password Manager (remember passwords after
unlocking)
Archiving directories into JAR/APK files(planned for future releases)
OS File Associations(planned for future releases)(only for KeyStores)

The Muse manuals are written in DocBook, and a build process generates the PDF files which are the actual documents delivered to end users. We have improved the build process of the Muse manuals so that it no longer hangs forever. We used a workaround; more details about it and about the underlying issues are given for reference after the steps involved are explained.

Basically, each document is built through a call to the Ant script ${MUSE_HOME}/doc/tools/build.xml, where FOP is launched in a <java> task using fork="true" (i.e. in another process). FOP sometimes blocked here when many manuals were built in a row; this happens when invoking ${MUSE_HOME}/buildall.xml doc, which calls through all the other Ant scripts and finally reaches this FOP invocation.

We ended up adding a timeout to the forked Java process. We chose a value large enough (15 minutes) that no single manual can take that long. When the Java process is killed by the timeout, the Ant result value is -1. Previously the result value was only tested for 0 (OK) versus non-zero (wrong); any non-zero value was considered a FOP error and the whole process stopped. We added a test against -1, and when -1 is detected we output this:


Warning: FOP probably timed out (15 minutes of execution), due to the JAI infinite looping bug.
Although it should be unaffected, inspect the PDF document ${docName}.pdf.

FOP should not return -1 on its own errors. We checked the FOP source code for this, and it can only return 0 (success), 1 (a file not found) or 2 (a real FOP issue). But to cover everything, the Release Manager should inspect the manuals, and also go through the whole build log for any other problem. Note that every time ${MUSE_HOME}/buildall.xml is called, a log file ${MUSE_HOME}/buildall.log is created (overwritten), so this file should be inspected at every step for unusual errors.
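For readers outside Ant, the same guard pattern can be sketched with the coreutils `timeout` command; `run_guarded` is a hypothetical helper name, and exit code 124 (the coreutils timeout indicator) plays the role that the -1 result plays in the Ant build above:

```shell
#!/bin/sh
# Sketch only (not the actual Muse build script): the timeout guard the
# article adds to Ant's <java> task, expressed with coreutils `timeout`.
# run_guarded LIMIT_SECONDS COMMAND... is a hypothetical helper.
run_guarded() {
    limit="$1"; shift
    timeout "$limit" "$@"
    rc=$?
    if [ "$rc" -eq 124 ]; then
        # coreutils `timeout` exits with 124 when the limit expires,
        # the analogue of Ant reporting -1 for a killed <java> fork
        echo "Warning: command probably timed out (${limit}s of execution)."
    elif [ "$rc" -ne 0 ]; then
        echo "Real error: exit code $rc" >&2
    fi
    return "$rc"
}
```

In the manuals build the limit would be 900 seconds, e.g. `run_guarded 900 sh fop.sh book.fo book.pdf` (the script name here is illustrative only).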

This timeout scenario will be recognized as below:

[echo] Converting Muse Designer Console.xml
DocBook2PDFSingle:
[INFO] Using org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser as SAX2 Parser

[WARNING] Zero-width table column!
[WARNING] Zero-width table column!
[INFO] Parsing of document complete, stopping renderer
Timeout: killed the sub-process
Java Result: -1
[echo] Warning: FOP probably timed out (15 minutes of execution), due to the JAI infinite looping bug.
Although it should be unaffected, inspect the PDF document Muse Designer Console.pdf.

FOP (actually Java Advanced Imaging, JAI) hangs upon JVM termination, so we will always see that FOP has done its job:

Parsing of document complete, stopping renderer.

This is the state in which the Release Manager used to find the building process, and at this point he would stop (^C) the Ant process. Now it takes up to 15 minutes (or whatever is left of them) until the Java process is killed by the timeout, and the messages above indicate this.

If one is following the building process live and sees it stalled at:

Parsing of document complete, stopping renderer.

he can do the usual stop to avoid waiting up to 15 minutes. If the process is in the background, it will do its job. We tested this: it actually blocked twice, and the total time spent building all the manuals was 71 minutes. For a background process this is fine. Previously it often happened that manuals stalled there for hours because the Release Manager was doing other tasks in parallel, so this is an improvement. The same goes for the Muse Control Center task that builds the latest manuals weekly: it should no longer hang as before.

--------

Now, about how a set of bugs and "features" of open-source components can chain together, making it almost impossible to achieve a reliable environment.

So, we had this bug of the Muse manuals build hanging. We observed the CPU at 50% (as there are two cores). A stack trace revealed that the JAI codec was doing some clean-up and intermittently filling in a stack trace... and the JVM could not stop:


"Thread-0" prio=2 tid=0x03088000 nid=0xee0 runnable [0x03fbf000]
   java.lang.Thread.State: RUNNABLE
        at java.lang.Throwable.fillInStackTrace(Native Method)
        - locked <0x06cf3958> (a java.util.ConcurrentModificationException)
        at java.lang.Throwable.<init>(Throwable.java:181)
        at java.lang.Exception.<init>(Exception.java:29)
        at java.lang.RuntimeException.<init>(RuntimeException.java:32)
        at java.util.ConcurrentModificationException.<init>(ConcurrentModificationException.java:57)
        at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
        at java.util.HashMap$KeyIterator.next(HashMap.java:828)
        at com.sun.media.jai.codec.TempFileCleanupThread.run(FileCacheSeekableStream.java:323)

   Locked ownable synchronizers:
        - None

"DestroyJavaVM" prio=6 tid=0x002b7400 nid=0x8a0 in Object.wait() [0x0090f000]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x0b71ad30> (a com.sun.media.jai.codec.TempFileCleanupThread)
        at java.lang.Thread.join(Thread.java:1143)
        - locked <0x0b71ad30> (a com.sun.media.jai.codec.TempFileCleanupThread)
        at java.lang.Thread.join(Thread.java:1196)
        at java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:79)
        at java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:24)
        at java.lang.Shutdown.runHooks(Shutdown.java:79)
        at java.lang.Shutdown.sequence(Shutdown.java:123)
        at java.lang.Shutdown.shutdown(Shutdown.java:190)
        - locked <0x2b79e3e0> (a java.lang.Class for java.lang.Shutdown)

   Locked ownable synchronizers:
        - None

We then jumped in to look at the whole DocBook-related process. There is a hierarchy of Ant builds, from the global buildall.xml, to the buildall.xml of each project, to the build file of each sub-project, finally calling ${MUSE_HOME}/doc/tools/build.xml, where this line was invoked:


<java classname="org.apache.fop.apps.Fop" fork="true" failonerror="false" maxmemory="512m" resultproperty="errorCode" timeout="900000">
<sysproperty key="MUSE_HOME" value="${MUSE_HOME}"/>
<!--...-->
</java>

We tried to read the FOP documentation. We were not using the recommended way (http://xmlgraphics.apache.org/fop/0.95/anttask.html), that is, the Ant task; instead we were calling the command line from an external process. We considered switching to the FOP Ant task, although we had a very, very old FOP (fop-0.20.5). However, we then needed to split into two operations what the command line was doing and the Ant task could not: namely, first create the *.fo file through an XSLT transformation, then run the FOP converter. That is the recommended way; the command line was running the XSLT internally, without the need for a temporary file.

We considered doing this because we were thinking of switching to FOP 1.0 in the future, which actually accepts the DocBook XML as a parameter rather than the FO file. We were also thinking there could be speed improvements, because in the future we could run multiple documents through a file set without stepping into each document. Considering this, we tried the documented way of using FOP in Ant and... ran into a PermGen issue. Even after increasing it to 256 MB we still got OOM PermGen errors. We then concluded this was due to the recommended way of doing a taskdef:


<taskdef name="fop" classname="org.apache.fop.tools.anttasks.Fop">
<classpath>
<fileset dir="${fop.home}/lib">
<include name="*.jar"/>
</fileset>
<fileset dir="${fop.home}/build">
<include name="fop.jar"/>
<include name="fop-hyph.jar"/>
</fileset>
</classpath>
</taskdef>

We had many classpath elements, not just the few in the example above, and although this taskdef was in the context of the build script of each manual and went out of scope with each manual, it still persisted in Ant. This is... an Ant bug:

"Sub-builds (antcall, subant) load a task each time the task is defined, but do not release it when the sub-build project completes. [...]"
[https://issues.apache.org/bugzilla/show_bug.cgi?id=49021]

Following the workarounds mentioned there, we ended up defining the FOP task at the upper level; at the lowest level we test whether it is already defined, and only define it if not. To make this simpler we used the recommended Antlib approach. We ended up with this in ${MUSE_HOME}/doc/tools/build.xml:


<condition property="alreadyDefined" value="true" else="false">
<typefound name="antlib:org.apache.fop.tools.anttasks.Fop:fop"/>
</condition>
<echo message="alreadyDefined: ${alreadyDefined}"/>
<if>
<equals arg1="${alreadyDefined}" arg2="false"/>
<then>
<echo message="redefining"/>
<taskdef uri="antlib:org.apache.fop.tools.anttasks.Fop" resource="antlib.xml" classpath="${MUSE_HOME}/doc/tools/fop/lib/"/>
</then>
</if>
<!--...-->
<fop:fop format="application/pdf" userConfig="${MUSE_HOME}/doc/tools/fop/conf/userconfig.xml"
basedir="${docBaseDir}/${docName}" fofile="${temp.fopFile}" outfile="${docBaseDir}/${docName}.pdf"/>

"basedir Base directory to resolve relative references (e.g., graphics files) within the FO document. No, for single FO File entry, default is to use the location of that FO file."
[http://xmlgraphics.apache.org/fop/0.95/anttask.html]

Hence we used basedir and were happy that it worked. On another run, we forgot that a document was open, the whole process had to be reverted and run again... and it finished in about 60 minutes. We forgot to mention that we left ANT_OPTS on the build machine set to use more memory (-Xmx896M -XX:MaxPermSize=128M); this should make the whole building process take a little less time.

Then we were happy that the images appeared in the docs. But we decided we should compare various newly rendered PDFs with the old ones. Many of them had the same size, but others showed differences: Muse Testing.pdf (one containing many images) was 1 MB smaller than the one from CVS. Looking into it, we saw images from Muse Designer Console.pdf and from ICE MARC to XML Converter.pdf in it. We figured it had to do with some cache, because there is now a single FOP instance for the entire run... and indeed that was it; actually, a feature of FOP:

“FOP caches images between runs. There is one cache per FopFactory instance. The URI is used as a key to identify images which means that when a particular URI appears again, the image is taken from the cache. If you have a servlet that generates a different image each time it is called with the same URI you need to use a constantly changing dummy parameter on the URI to avoid caching.”
[http://xmlgraphics.apache.org/fop/1.0/graphics.html#caching]

At this point we stopped this whole chain, as it was not worth it. We are keeping the work done for reference, so that when these issues are resolved in the future we can use the recommended way. Adding a dummy parameter (we had not done this anyway), using the file name in the image, or even mentioning the directory (which is the document name) would mean many modifications; and if a document name ever had to change, modifying everything would be too much trouble. So, using different file names for images should never be considered.
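For reference, the dummy-parameter technique the FOP documentation quoted above mentions can be sketched in shell; the `nocache` parameter name and the nanosecond timestamp are arbitrary choices for illustration, not anything Muse or FOP prescribes:

```shell
#!/bin/sh
# Sketch: make an image URI unique per run so FOP's per-FopFactory
# image cache never sees the same key twice. The parameter name
# `nocache` is arbitrary; any constantly changing value works.
uri_nocache() {
    printf '%s?nocache=%s\n' "$1" "$(date +%s%N)"
}
```

For example, `uri_nocache file:images/logo.png` prints something like `file:images/logo.png?nocache=1700000000123456789`, a different key on every run.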

Meanwhile, while following the proper documentation, we came across the timeout parameter of the forked <java> task, and we had this workaround in mind all along. But we were curious why JAI (Java Advanced Imaging) blocks in the stack trace above, and we found that it is a JAI bug, not resolved in the latest version either. The bug is detailed here:

“The shutdownHook TempFileCleanupThread throws a ConcurrentModificationException in fileIter.next() sometimes. This exception is ignored and because .next() doesn’t succeed the loop never ends. It is about this code in com/sun/media/jai/codec/FileCacheSeekableStream.java:”

/**
 * Deletes all <code>File</code>s in the internal cache.
 */
public void run() {
    if (tempFiles != null && tempFiles.size() > 0) {
        Iterator fileIter = tempFiles.iterator();
        while (fileIter.hasNext()) {
            try {
                File file = (File)fileIter.next();
                file.delete();
            } catch (Exception e) {
                // Ignore
            }
        }
    }
}

[http://java.net/jira/browse/JAI_CORE-121]

According to the comments on the java.net JIRA, this has not been resolved so far. We also had a peek at the latest JAI code from trunk and it is not fixed there either. Because the ConcurrentModificationException is thrown by next(), caught and ignored inside the loop body, hasNext() keeps returning true and the loop never terminates. It is strange that something can block upon termination. So we ended up doing the timeout trick; nothing else came to mind, and too much time had already been spent on this.

We wrote all this down in case someone else is confronted with something similar, and if not, as a lesson in how not to design things. Recently we have come across many Swing bugs, and have even filed one with Oracle.

The Core team was fighting a similar shutdown bug with RMI from Sun (in the context of Tomcat and Jackrabbit)... none of this is pleasant, especially when some bugs do not even have a workaround.

The Muse™ Federated Search (MuseSearch™) and the Standalone Muse™ Proxy for Muse™ Proxy Applications are two client-oriented services powered by Muse™ Technology. Being two separate services, it is recommended to run them on separate servers (either physical or virtual machines) to keep them clearly separated from both a management and a technical point of view.

However, there are cases when this setup is not wanted, and it is required to run both on the same server (provided, of course, that the hardware requirements are met). This is the subject of this article.

A Muse™ Proxy component (Software Integration Edition) is already part of Muse™ Federated Search by default, providing IP authentication to data service providers and rewriting record URLs for end-user link navigation to native records or full text.

Technically, there are two solutions for hosting the Muse™ Federated Search and a Standalone Muse™ Proxy for Muse™ Proxy Applications on the same server. Both are possible thanks to the Muse Proxy's ability to bind to multiple IPs.

1. Muse™ Federated Search, which includes Muse™ Proxy (Software Integration Edition) by default, plus a second Standalone Muse™ Proxy with the Muse™ Proxy Applications component enabled; that is, two Muse™ Proxy instances.

Advantages:

  • The two services are clearly separated.

Disadvantages:

  • This setup is fairly complex, because it requires extra configuration and customization in both Muse Proxy instances, not to mention maintenance work such as upgrading to newer versions.

2. Muse™ Federated Search with Muse™ Proxy (Software Integration Edition), but with the Muse™ Proxy Applications component also enabled; hence a single Muse™ Proxy instance serving both MuseSearch™ clients and the clients using Muse™ Proxy Applications.

Advantages:

  • There is only one Muse Proxy instance to manage, which is the main advantage from the maintenance point of view.

Disadvantages:

  • The monitoring and statistics are common to the Muse Federated Search and Muse Proxy Applications services, hence they cannot be differentiated;

  • The maintenance done on the Muse Proxy for one of the services will affect the other service as well, at least in terms of downtime;

  • The high usage of one service will affect the performance of the other service.

 

The steps for implementing the two solutions above are presented next.

1. Running two Muse™ Proxy instances on the same server as part of the Muse™ Federated Search and Muse™ Proxy Applications services.

  • The first step is to install the Muse™ Federated Search and the Muse™ Proxy needed for it. The installations of Muse™ Federated Search and Muse™ Proxy for Federated Search are not covered in this article. The only note is that the Muse™ Proxy server must be configured to bind to the IPs specific for the Muse™ Federated Search service.
  • Make the preparations for installing the Muse™ Proxy Applications service. Because a second Muse™ Proxy service will be installed on the same machine, there are several preparations to be made prior to the installation:
  • uninstall the Muse™ Proxy service used by the Muse™ Federated Search service; this is done by running the %MUSE_HOME%\proxy\UnInstallMuseProxyService.bat script on a Windows OS or ${MUSE_HOME}/proxy/setup/startMuseProxyServiceSetup.[sh|csh] on a
    Linux OS.

    • on a Windows OS rename %CommonProgramFiles(x86)%\InstallShield into %CommonProgramFiles(x86)%\InstallShield.MFS and %USERPROFILE%\muse-proxy-options.txt into %USERPROFILE%\muse-proxy-options.txt.MFS;
    • on a Linux OS rename ${HOME}/InstallShield into ${HOME}/InstallShield.MFS and ${HOME}/muse-proxy-options.txt into ${HOME}/muse-proxy-options.txt.MFS;
  • Install Muse™ Proxy according to the instructions from the Muse™ Proxy Install.pdf manual. During the installation process make sure to install Muse™ Proxy in a different location than the one used by the Muse™ Federated Search service and do not install Muse™ Proxy as a service when asked by the setup. Installing it as a service will be done manually. Do not start the new Muse Proxy instance yet.
  • Make the following postinstall configurations:
  • edit the MuseProxy.xml file from the newly installed Muse™ Proxy and add in the BINDADDRESS field the list of IPs for the Muse™ Proxy Applications service on which to bind;
  • edit the MuseProxy.xml file from the newly installed Muse™ Proxy and add in the RMI_SERVER_ADDRESS field the first IP from the list of IPs for the Muse™ Proxy Applications service;
  • edit the startMuseProxy[.bat|.csh] and stopMuseProxy[.bat|.csh] scripts from the newly installed Muse™ Proxy, and after the line containing the Copyright statement add the following:
    • In the Windows OS scripts (startMuseProxy.bat/stopMuseProxy.bat) add the following line:

      set MUSE_HOME=location_on_disk_of_MuseProxy

      replacing location_on_disk_of_MuseProxy with the actual location on disk of the newly installed Muse™ Proxy.

    • In the Linux OS scripts (startMuseProxy[.csh]/stopMuseProxy[.csh]) add the following line:

      export MUSE_HOME=location_on_disk_of_MuseProxy

      replacing location_on_disk_of_MuseProxy with the actual location on disk of the newly installed Muse™ Proxy.

  • On Windows OS edit the InstallMuseProxyService.bat script from the newly installed Muse™ Proxy and change the line:

    set SERVICE_NAME=Muse Proxy Server

    to

    set SERVICE_NAME=Muse Proxy Server Applications

  • after the line added above, add the following:

    set MUSE_HOME=location_on_disk_of_MuseProxy

    replacing location_on_disk_of_MuseProxy with the actual location on disk of the newly installed Muse™ Proxy.

  • Install the new Muse™ Proxy instance as a system service as follows:
  • On Windows OS run the following script from the newly installed Muse™ Proxy:

    InstallMuseProxyService.bat

  • On Linux OS copy the existing /etc/init.d/museproxy into /etc/init.d/museproxyapps and:
    • edit /etc/init.d/museproxyapps and change the value of the MUSE_HOME variable to point to the location on disk of the newly installed Muse™ Proxy;
    • configure the /etc/init.d/museproxyapps script to be started at boot by using system tools such as update-rc.d:

      update-rc.d museproxyapps defaults

  • Start the Muse™ Proxy Applications service as follows:
    • On Windows OS go to the “Services” Management Control Console, locate the “Muse™ Proxy Server Applications” service and start it; or start it by running the following command in a Command Prompt window:

      net start "Muse Proxy Server Applications"

    • On Linux OS run the following command:

      /etc/init.d/museproxyapps start

  • Update the MUSE_HOME environment variable to point to the Muse™ Federated Search home location (default /opt/muse on Linux and C:\Program Files (x86)\muse on Windows). On Windows go to Control Panel->System->Advanced system settings->Environment variables, locate the definition of the MUSE_HOME variable and change it accordingly. On Linux this is done by editing the user profiles, individually per user in ${HOME}/.login or globally in /etc/profile.
  • Install the Muse™ Proxy service for Muse™ Federated Search; this is done by running the

    %MUSE_HOME%\proxy\InstallMuseProxyService.bat

    script on a Windows OS or

    ${MUSE_HOME}/proxy/setup/startMuseProxyServiceSetup.[sh|csh]

    on a Linux OS.

  • Start the Muse™ Proxy service used by the Muse™ Federated Search as follows:
  • On Windows OS go to the Services Management Control Console, locate the Muse™ Proxy Server service and start it; or start it by running the following command in a Command Prompt window:

    net start "Muse Proxy Server"

  • On Linux OS run the following command:

    /etc/init.d/museproxy start

2. Using the Muse™ Proxy instance from the Muse™ Federated Search service to also serve the Muse™ Proxy Applications service.
The Muse™ Proxy instance from the Muse™ Federated Search service does not include Muse™ Proxy Applications in its license, hence it must be upgraded to include it. For this purpose, the latest version of the Muse™ Proxy setup kit must be run with the acquired license that includes Muse™ Proxy Applications. This will be treated as an upgrade, so the existing configurations of the Muse™ Proxy for the Muse™ Federated Search service will be preserved.

In this post we compare the features of Muse™ 2.6.0.0 and the Vivisimo Velocity Platform 7.5-6 related to building and running Connectors/Sources. This comparison is exclusively about Federated Search capabilities and the environment for building, maintaining, configuring and running Connectors/Sources. Although the comparison is made by the developers of Muse™, we tried to be as objective and fair as possible. If you have any comments or suggestions, do not hesitate to contact us.

The comparison was drafted when we successfully finalized the integration of Muse™ Smart Connectors into the Vivisimo Velocity Platform 7.5-6. Searches against Muse™ Smart Connectors were made available by writing a configurable Vivisimo Source Template connecting to our Muse™ Web Bridge API. The Vivisimo Source Template was quite complex, as we needed more stateful operations, but we wrote it once, and now Muse™ Smart Connectors (sometimes making tens of requests) can be integrated straightforwardly into a Vivisimo search. You will see in the comparison table below that writing Velocity connectors which include authentication/authorization, subscribed-database navigation, fine-grained content extraction and Full Text availability is a very difficult and sometimes impossible task; this is where the integration with Muse™ Smart Connectors helps.

If you are interested in benefiting from 6000+ Smart Connectors, with quality, authoritative, premium content, in your Vivisimo Platform (IBM InfoSphere Data Explorer), please contact us.

PerspectiveItem MuseVivisimo Velocity Platform
The ModelWho writes the connector?The Muse team is creating the Source Package from A to Z following strict rules and Quality Assurance. Partners can as well create Muse Connectors with programmers.Partner creates sources from A to Z, except few templates and is responsible with maintaining them.
PhilosophyMuse’s philosophy was mainly for a central team developing the sources following rigorous and well documented procedures and naming conventions. The sources are uploaded into Global Source Factory after passing the Quality Assurance phase; the partner doesn’t have to have programmers to do further work or development of connectors, and the process is very well standardized.
Muse can also offer externalization of source building together with the set of procedures and tools.
Velocity philosophy is mostly that the partner deals with source creation (apart for some few set of templates sources) and hence needs to have programmers handling this.
TeamMuse has a team that is doing Smart Connectors for 14 years now. We have well established procedures running thousands of connectors out of the box.Velocity is more recent in business, and more recent focusing on meta-searching. One or two programmers can never compensate with the team of the product itself following all the internal procedures and Q&A and using tens of tools.
Federated search capabilites were only included in Velocity in version 3. They were not there from the beginning, this means that some of the things were added in as adjustments. Muse was build for federated operations (search) in mind from the beginning.
The Workflow/
life-cycle of connectors
WorkflowThere is a well defined Smart Connectors workflow from request to delivery time. There is meta information related to a Smart Connector so that it is safely identified: version, date created, build date, data service, type, protocol, status (released, defunct, defunct with replacement), etc. There are rigorous naming conventions.Unaware of any. Didn’t see any meta information to identify such meta information elements.
LogisticsThere is a whole network for the source management both for the developer and for the partners installing Muse. There are internal indexes with the development sources, there are external indexes with the installed sources, and many other elements.Although Velocity supports a master repository, synchronization is not done on a per source base, but rather on all of the resources (nodes). Sources are just a particular case. Without versions and other metadata fields it is hard to distinguish and be rigurous. Also merging at the level of code requires programmer expertise but not for an administrator.
ManagementEasily managing thousands of sourcesManaging ten or twenty sources could be OK, but managing hundreds of sources is impossible in Velocity.
PackagingMuse is using compiled and packaged connectors with well defined pieces. There are tens of manual pages about the content of Muse Source packages.Velocity is using interpreted connectors out of a single XML file for all the pieces a source involves.
Source Configuration and CapabilitiesConfigurationIn Muse configuration is detached from the source creation. It is a well defined and documented stage.In Velocity configuration blends with the source code itself, no clear distinction, besides this will be done on each partner in place.
In many places Vivisimo is incorectly using the term "configuration" for creating (developing) the connector (source).
Backup/RestoreBackup/Restore sources when updating, previous version can be restored.No version, no backup.
Parameter uniformizationWe strive for unifromization of thousands of sources to be able to edit them nicely in various Muse Administration Consoles.There is no uniformization of the source parameters, just few parameters are output for existent sources on Velocity 7.5-6.
Exampe: SSL CertificateIn Muse each entity is correctly organized in its directory, in this case ${APPLICATION_HOME}/certificates (Muse is also very permissive in having resources anywhere on disk but for simple administration conventions are involved). For example support for handling HTTPS SSL Certificates sources in Muse it is a simple administrative task.
There are visual selectors to upload the certificate file and you know precisely what to do where – this is not even a programmer’s job.
In Velocity you have to have access to the file system to upload a SSL certificate and you may have to modify the code of the source in order to change the reference to the HTTP certain certificate (in case it is the first time or you try to keep up some naming conventions).
"ssl-cert NMToken Full path to a file containing an SSL certificate to be used for HTTPS connections."
There is no documentation on any recommendation of how to handle the files and if naming conventions or certain directories should be followed.
AuthenticatorsFor sources with authentication one can select from the authenticator library the suitable authenticator.
For example sources using Web Access Management such as EZProxy can be configured in Muse. Authenticators are pluggable to more connectors and depending on their capabilities and can be interchanged. For example EZProxy authenticators can be configured for more connectors without the need to have the connector re-written.

There is metainformation in the Muse logistics tools that accounts for these relations. In Muse there is a lot of semantic and parsing of the response in the authenticator library.
You have to write an authenticator directly in the source code. No pluggable authenticators – just a simple login function which transform two parameters into CGI parameters.
So, just a syntactic help for the input side of the HTTP protocol but no semantics. Also there may be more HTTP requests needed for authentication not just a simple one. There’s also the case when it is necessary that the first request is not done by the authenticator.
Proxy PAC/Proxy redundancyComplex proxy selection logic via interpreting standard Proxy Automatic Configuration Java script files (Proxy PAC). A particular case of this configuration is proxy redundancy for various data centers. The requests made by the connector to the source will pass through the proxy selected.No Proxy PAC or other automatic interpretation.
Only a single proxy can be specified per project (application) or per each parse action (inside the source).
URL Rewriting and MNMRewriting URLs for session migration from the server to end-user – otherwise the record URL will not be accessible from the browser session. When the end-user will click the URL will go to the Muse Navigation Manager (MNM) without the need for the end user to have any proxy in his/her browser. Complex rewritten of the page will happen and the MNM will act on behalf of the end-user.Velocity does not have rewriting capabilities for the record URL, neither has a reverse proxy or other navigation manager available.
Importance of URL rewriting and MNMIn Muse Global Source Factory out of 6135 Sources 3368 requires MNM rewriting for having a functional Full Text (record) URL. That is over 54% of the Smart Connectors.In Velocity this can only be offered through Muse connectivity.
DifficultyProxing and URL Rewriting is done transparently and nothing has to be coded in the connector. It is just a matter of configuring the Source Package or the application to specify running or not through a proxy and the MNM rewriting pattern [pattern which is generally pre-configured].No Proxy PAC, and for Proxy if it is set at source level (and not at the project level applying to all the sources) the source code must be modified.
Source Organization
Muse: You can have the same source called from different applications and configured with different details.
Velocity: If you need the same source in two different applications (projects) with a slight configuration difference (e.g. a different authentication type) you have to name the sources differently, because sources are held globally in the Velocity instance.
Although Velocity allows end-user login details to be passed transparently to sources, a source can differ per application in more ways (such as, for example, the Home URL). In practice, organizational access is also wanted.
Building (creating) connectors
General
No sense for comparison. It is not even only about writing the code directly: the Velocity system is limited there, and the available functions cover only very simple tasks, while in Muse there are plenty of APIs to use for various connector functions. Besides the things totally unsupported in the Velocity system (such as URL Rewriting and Proxy PAC) there are other tasks that cannot be accomplished, such as mapping very distinct query grammars or the cases exposed in the next cells of the Query part.
For binary protocols there seems to be no direct support in Velocity for plugging in libraries, so workarounds such as creating an HTML/XML bridge are used – for example, Z39.50 sources are queried through another distinct API layer (an HTTP CGI, not pertaining to Velocity) instead of being queried directly. Hence for those connectors you externalize the whole logic and implement it as best you can if it is not offered by the Vivisimo company.

Even for HTTP-based protocols or simple extensions, the capabilities of the Velocity system seem limited. We have seen cases where, for the Yahoo! BOSS API, the partner used another server-side PHP script totally outside of the Velocity system because Velocity was not able to handle the authentication part.

This is not solid, not configurable, and not standard – it is just a workaround, and it cannot fill the big gap in the capabilities of Velocity 7.5-6.

This could have been easily integrated in a Smart Connector in Muse because the package handling that connectivity exists in Java, and our Smart Connectors have adjacent mechanisms for managing all the necessary libraries at runtime.

Velocity has other disadvantages when facing federated searching – URL rewriting and a Navigation Manager for full-text navigation is one of the most critical. Of course, a Muse Proxy Software Integration Edition could help here.

In this light it may not even make sense to continue the comparison of how things are done differently when building connectors, because in Velocity some very important and critical things cannot be done at all. Practice also showed that even simple cases could end up wrongly coded in Velocity.

Still, to support the above statements, the comparison continues below with concrete evidence.
Connectors APIs and Connector Code Capabilities
Muse: Even if not everything is covered by tools/wizards that generate the code, there are library APIs (in modulesutil.jar) which cater for the necessary functions. The modulesutil.jar is simply updated (hot deployment) on any installation if necessary.
Every time we see several similar instances of a pattern, we create a new API or extend an existing one.

Only in the unlikely event of the APIs being unsuitable would you have to write directly in Java. But at least it is Java and not a custom language you have to learn.
Velocity: Even writing the code directly in Velocity cannot resolve certain situations such as parsing backward, Proxy PAC, complex queries, and many others exposed in this report.
External protocol API libraries
Muse: Sources based on an external development kit API (either text or binary) can easily be integrated in Muse and even delivered to an existing older installation without modification of the core parts.
We have logistics for packing all these libraries together with the source package and ensuring every plugin connection in Muse. This is even done automatically and supports hot deployment at run-time (no server restart).

A few examples where the necessary libraries (jar files) are delivered together with the connector: JSON.jar, tn5250j.jar, database drivers, etc.
Velocity: Different source protocols that need external separate API (development kit) libraries cannot fit in the system. These have to be implemented as standalone external web applications which expose an HTTP text (HTML/XML) API – a bridge, in other words.
This is a non-standard, non-uniform, time-consuming creation, and it means losing the initial protocol bindings when creating the records/fields. Not to mention it needs a very custom setup to work, and the partner would have to be careful with maintaining that piece of code as well.

Examples are the Z39.50 case, or the workaround provided for the Yahoo! BOSS API by using another server-side PHP bit – this could have been easily integrated in a Smart Connector in Muse.
Component Separation
Muse: Muse has clear definitions for a Source Package (Smart Connector). In Muse, entities (Configuration Profile, Query Translator, Authenticators, ExParser, to name just a few) are correctly separated. These entities can then be reused by other Source Packages as well, and different programmers can work distributed on these components.
Velocity: In Velocity everything is inside a single XML file which is the code of the connector. Writing such a connector from scratch is sometimes considered a "configuration" job, but it really is coding.
Steps
Because HTML connectors are still the majority, we mainly include the steps such a connector may need.
The main steps a connector may need to be coded for are the following:
– Authentication (sometimes navigation to the authentication page) *
– Use session if necessary to save/load persistent native items*
– Navigation to Database (can be several pages)
– Selection of Database
– Navigation to the Advanced Search Page
– Query translation *
– Query capabilities and remapping*
– Perform the search *
– Fetching and parsing results pages (more requests if necessary) *
– High performance extraction (citation parsing, date formatting) *
– Extended Parsing – one request for each record *
– Record Normalization *
– URL Rewriting *
– Error handling*
The starred (*) ones are applicable for the API connectors as well.
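The step list above can be sketched as a connector skeleton. This is purely illustrative (the class and method names are ours, not a Muse or Velocity API); it only shows the typical ordering an HTML connector walks through.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative skeleton of the connector step sequence described above.
public class ConnectorSkeleton {
    private final List<String> trace = new ArrayList<>();

    public List<String> run(String query) {
        authenticate();             // may itself need several HTTP requests
        navigateToSearchPage();     // database selection, advanced search page
        String nativeQuery = translateQuery(query);
        search(nativeQuery);
        parseResults();             // more requests if results span pages
        normalizeRecords();         // includes URL rewriting for full text
        return trace;
    }

    private void authenticate()             { trace.add("authenticate"); }
    private void navigateToSearchPage()     { trace.add("navigate"); }
    private String translateQuery(String q) { trace.add("translate"); return q; }
    private void search(String q)           { trace.add("search"); }
    private void parseResults()             { trace.add("parse"); }
    private void normalizeRecords()         { trace.add("normalize"); }
}
```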
Velocity: The documentation only declares the following items as part of the source creation and execution steps; the rest are mainly adjustments to fit in:
– Translating the query
– Fetching Result Pages
– Parsing/Normalizing the Result Pages
Problem decomposition
Muse: Muse offers APIs and tools for the fine-grained steps. Muse tools help down to very fine details such as citation parsing extraction and date formatting, and mapping from the Muse normalized query to dozens of source grammar types with thousands of variations.
Velocity: Velocity uses a coarse-grained division and leaves the details for the programmer to handle in any form he or she wants. The tools are for the coarse-grained level only.
Tools
Muse: The Muse Builder IDE (Integrated Development Environment) groups the tools necessary on the development side. It also includes logistics tools to guide the workflow between programmer tasks and Team Leader tasks, greatly improving work efficiency. There are also tools for Quality Assurance. Below are the most important tools used for development:
Source Package Assistant
Connectors Generator
Search Query Translator Generator
Source Package Testing
Each of them involves sub-tools.
There is a clear separation from the Muse Admin Consoles, where only configuration is done, as well as from other tools for source infrastructure.
Velocity: The Velocity admin tool (the same tool where both creation and administration take place) is mainly a set of variable configurations and additions of blocks, but in most cases there is no visual configuration or editing of the block itself (for example the secondary parser for complex query transformation, additional parsing logic – any case that is more than clearly delimiting a field).
All this in a context where even writing Velocity code directly does not solve the problem.
QueryTree vs String
Muse: The normalized canonical query (ISR) is an XML tree on which the source translator is applied. Because we work directly on the tree we can convert to all grammar types and not just to one-to-one correspondences.
Velocity: Although the normalized canonical query is represented as an XML tree at some point, the tree is lost, and the source form can only act on strings to do the transformations. There is only a one-to-one (no change in the structure of the grammar) possibility of mapping:
"The default form template for search engine sources lets you specify one-to-one matches between field names and content names." [http://.../vivisimo/cgi-bin/admin?id=configuration-syntax-fields]
Different Grammar types
Muse: Every grammar can be handled in Muse, even visually through the SQTG. We can convert into totally distinct grammars such as non-parenthesized, postfix, infix, splitting and combining terms and operators in their own fields, either simple or with indexes (numerical or literal).
Velocity: There is no possibility of handling different grammars (postfix, infix) or separating binary operators into explicit CGI terms:
"String placed between the two operands (in Vivisimo an operator is binary if and only if it has a middle-string specified)."

In Velocity these items are not supported by the raw language itself, let alone by tools.
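Why keeping the query as a tree matters can be shown in a few lines. The types and grammar below are made up for illustration (they are not Muse's actual ISR): the same tree serializes into structurally different grammars, something a one-to-one string mapping cannot express.

```java
// Illustrative sketch: one query tree, several output grammars.
public class QueryGrammars {
    interface Node {}
    record Term(String field, String value) implements Node {}
    record Op(String op, Node left, Node right) implements Node {}

    // Parenthesized infix grammar, e.g. "(TI=dogs AND AU=smith)".
    static String infix(Node n) {
        if (n instanceof Term t) return t.field() + "=" + t.value();
        Op o = (Op) n;
        return "(" + infix(o.left()) + " " + o.op() + " " + infix(o.right()) + ")";
    }

    // Postfix grammar, e.g. "TI=dogs AU=smith AND".
    static String postfix(Node n) {
        if (n instanceof Term t) return t.field() + "=" + t.value();
        Op o = (Op) n;
        return postfix(o.left()) + " " + postfix(o.right()) + " " + o.op();
    }
}
```

A string-to-string mapper that only substitutes field and operator names can never turn the infix form into the postfix form, because that requires re-parsing the expression structure.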
Visual tools
Muse: The translators are generated by the visual Search Query Translator Generator (SQTG) tool, where you visually specify the mappings, the functions applied over values and over already mapped sequences, the grammar type, the operator grouping, etc. If something is not supported by the generator it can be written directly in the ISR XSLT.
Velocity: If you want to benefit from visual assistance you have to add all the operators and fields to a common repository, so you end up mixing one content provider's mappings with another's just to be able to visually select sources. You cannot visually configure a mapping applicable to just one source if it is not defined globally in the operators section.
"Whenever you create a form using the Standard Form template, a list of checkboxes (corresponding to the list of operators defined in the operators section) will allow you to quickly select which operators are supported.

To extend this list, just go to the operators section and create new operators. See the online schema documentation for a complete specification of operators."

The philosophy is also confusing in making fields a special case of operator, so when you map fields you actually map operators.
Query pre-mapping
Muse: You can do query remapping at runtime. If some fields are not supported, they can easily be remapped to other supported fields (say :TITLE to :SUBJECT) via the Pre-mapping functionality.
Velocity: You either send or don't send the query to a source that does not support it (strict or optional), but you cannot do further pre-mappings.
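The idea of pre-mapping can be sketched in a few lines. This is not the actual Muse Pre-mapping API, only an illustration: unsupported attributes are rewritten to supported ones before query translation, so the source still gets a query it can serve.

```java
import java.util.Map;

// Illustrative sketch of attribute pre-mapping before query translation.
public class PreMapper {
    private final Map<String, String> remap;

    public PreMapper(Map<String, String> remap) {
        this.remap = remap;
    }

    // Returns the attribute the source will actually be searched with;
    // attributes without a remap entry pass through unchanged.
    public String map(String attribute) {
        return remap.getOrDefault(attribute, attribute);
    }
}
```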
Other examples
Muse: Although not visually, we can group among about a thousand bib attributes, depending on the Z39.50 native source capabilities.
Velocity: The Z39.50 source template only allows two term types (author and title).
Complex mappings
Muse: Complex queries can be handled visually, as explained above.
Velocity: There is a small way out for mapping non-one-to-one source queries, by applying a parser over what the source form initially generates (a parse element).
This is complex and quite confusing (a parser to parse a parse element). It requires knowing what the interim XML language looks like and forces you to write string processing manually in XSLT.

Besides the fact that there is no tool for this, once you lose the expression tree you have to reconstruct it to be sure you map it one-to-one. This requires lexical and structural analysis – doing such analysis in XSLT is next to impossible and error-prone.

Only particular cases or a limited number of terms/operators might be adjusted for.
Source Query Capabilities
Muse: Each source can expose its search capabilities at runtime.
Velocity: No.
Dynamic limiters
Muse: Dynamic limiters depending on the capabilities of each source are supported as well – each source receives an individual query according to its capabilities, provided the client (interface) sends the query accordingly.
Velocity: No.
Parsing / Data Extraction
More parsers are necessary
Muse: Parsing pages is necessary both for the main extraction and for the navigation steps up to the search point, and also, where applicable, for Extended Parsing. Practice showed there are thousands of instances where at least one more request (beyond authentication) is necessary before the search is done. Muse Connectors Generator supports as many parse instructions as necessary through visual configuration.
Velocity: The documentation and the standard visual configuration use just one parser, with room for just one more (a login parser). This is also the only situation with a wizard, and even in this case, if the parsing of the record page is anything more than simple markers, you need to write it by hand.
Visually no; the Velocity XML language does support chaining multiple parsers (say, for a potential navigation), but that is not done in a natural way and requires a higher-expertise programmer and possibly deep training.
The language interprets a parse instruction and replaces it with another parse instruction which normally generates a record. But in the case of another request you need to generate yet another parse instruction in the output, which is interpreted in turn, and so on.
We are talking here about parsing intermediate pages to get navigational elements (such as, but not limited to: session IDs, database IDs, URLs, cookies, and many other hidden fields). There are thousands of sources that natively do not function unless you go step by step and perform the steps a human does when accessing them. Not to mention the case of extended parsing. These can all be done through Muse Connectors Generator.
Velocity: Below is what the Velocity documentation says about just a single case, the one with the session ID:
"Some sites do not rely on cookies for authentication, but specify a session ID carried on as a CGI parameter. Vivísimo Velocity can comply with this, but direct configuration of the source's XML is required. The page specifying the session ID must be parsed, and the session ID must be saved in a variable." – and there are sources that need about 3 or 4 such steps, gathering and using variables.

Although possible in the raw Vivisimo XML language, there is no wizard/tool, and not even an API through library functions, and as explained above the chaining can be very misleading.
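The session-ID case quoted above can be sketched concretely. The page markup, URL and parameter names below are invented for illustration: an intermediate page is parsed for a session ID, which is then carried into the search request as a CGI parameter.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of one intermediate navigation step:
// extract a session ID from a page, then use it in the search URL.
public class SessionIdStep {
    private static final Pattern SESSION_ID =
            Pattern.compile("name=\"sessionID\"\\s+value=\"([^\"]+)\"");

    static String extractSessionId(String intermediatePage) {
        Matcher m = SESSION_ID.matcher(intermediatePage);
        return m.find() ? m.group(1) : null;
    }

    static String buildSearchUrl(String sessionId, String query) {
        return "http://source.example.com/search?sessionID=" + sessionId + "&q=" + query;
    }
}
```

Real sources may need three or four such steps chained, each feeding variables (cookies, database IDs, hidden fields) into the next request.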
Authenticators
Muse: Authenticators are pluggable parts of the Source Packages, and a single connector can have several authenticators depending on the partner and environment. For example, many web sites allow users to log in via two or more different entrances to the site. For each entrance there is a path that leads into the site; the paths converge at some point. The authenticators enter the web site and navigate to a common point for each source.
Velocity: Logon (the Velocity term) is bound to the source. There is no possibility to interchange or use multiple modules.
It is just syntactic help for the input side of the HTTP protocol with no semantics – while in Muse there is a lot of semantics and parsing of the output response in the authenticator library. Also, more HTTP requests may be needed for authentication, not just a single one – Muse supports this.
In Muse the programmer can select from a myriad of existing authenticators, or write a new one using the documented API library and following the procedures.
Hence a source can have several authenticators, and the same authenticator may be used for different sources; there exists a pluggable mechanism in Muse to offer this – in Velocity there is a tight coupling between the source and its logon (note that authentication can mean more than just a "logon" – that is why Muse uses the term "authentication").
Effective parsing and wiring
Dynamics
Muse: Parsing starts while the page is still arriving (even for XML Sources, as we use SAX parsers combined with DOM just for the record), not after it has ended.
Velocity: The page needs to arrive entirely.
Types
Muse: The Muse Connectors API and Muse Connector Builder have tens of processing possibilities for extraction – regexp is just one among dozens. To name just a few, there are rule-based extractors, rejection rules, approval rules, string-token-based rules, index rules, estimate parsing, table header parsing, HTTP header parsing, date formatters, citation extractors, etc.
Velocity: There are just two types of parsers: one based on XSLT and one based on Regular Expressions, with very small variations, as below:
"html-xsl: Same as xsl except that it is preceded by an HTML to XML conversion. Since HTML is ambiguous, the HTML to XML conversion can be done in many ways. Vivisimo will try to close unclosed tags, add missing tags (like html and body), escape entities, and perform other normalizing steps."

Hence the HTML-to-XML conversion may not be reliable, because at some point the conversion needs to be done heuristically, based on best guesses.
In Vivisimo Velocity 7.5-6 there are discussions about introducing Java parsing in Velocity to compensate for the limitations of the two existing parsers and allow for more flexibility:
"java: Instantiates and runs a Java class on the input data. This parser type is not finished and should not be used."
Wizard, Tools?
Muse: The Muse Connector Generator can show you dozens of possibilities at each step: CONDITION, SET, SETTER, SET_URL, REPLACE, REPLACE_FIRST, REPLACE_ENTITIES, CLEAR_TAGS, REMOVE_MULTIPLE_WHITE_SPACES, REMOVE_EOL, REMOVE_HTML_COMMENTS, REMOVE_HTML_SECTIONS, REPLACE_IN_QUERY, GET_VALUE_FROM_QUERY, ADD_COOKIE, REMOVE_COOKIES, TOKENIZE, TOKENIZE_FROM_MULTIPLE_SOURCES, IF, ADD_TO_VECTOR, ADD_TO_httpProperties, GET_FROM_httpProperties, CLEAR, JAVA, CALL, RETURN, SOURCE, RULES, SPAN_RULES, MAPPINGS, SKIP_HEADERS, TABLE_HEADERS_PARSING.
Muse Connector Generator also offers autocompletion for thousands of fields in the Data section, for the default data model or any other, as we have many Data Model types, all supported by the Muse Connector Generator.
Velocity: Velocity only has a list of XPaths to configure for each field for XML extraction, or regexp delimiters and a few other settings – in total about 18 settings per parser (and this covers all the record fields – author, date, title). That is the entire wizard for extraction configuration.
Only about 10 fields can be visually configured, and only for very simple extraction cases.
Positional extraction
Muse: Extraction based on a certain index position for text records (e.g. author is at positions 0 to 20, title at 22 to 40) – yes, through the Index Rule.
Velocity: No.
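The index-rule idea reduces to fixed-column extraction from a fixed-width record line. A minimal sketch (this is not the Muse Index Rule API; the column positions mirror the example in the text):

```java
// Illustrative sketch of positional (index-based) field extraction.
public class IndexRule {
    // Extracts the field between the given column positions, tolerating
    // lines shorter than the declared field end.
    static String extract(String line, int from, int to) {
        int end = Math.min(to, line.length());
        return from >= end ? "" : line.substring(from, end).trim();
    }
}
```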
States
Muse: Muse Connectors Generator has dozens of state types.
Velocity: Just one type of state (in the case of regexp).
Direction of parsing
Muse: The parsing rules can work in both directions, forward and backward, in order to extract a certain entity. There can be several rules that go back and forth multiple times to narrow down the extraction, not just two rules for the heads.
There are also more configuration items for the matching rules than just the rule itself: case awareness, what to find (the start, the end), the action to take with the cursor, index positions, etc.

Going back and forth in the string is necessary to be sure we pick the most invariant markers, so that the connector is reliable and tolerant to changes.
Velocity: The regexp matching in Velocity, together with its state, can only move forward in the string, which can make extraction harder or even impossible in cases where the entity to be extracted is only identified by an element in the middle or at the end of the entity.
Such a case is when, for a record or field, we need to identify, for example, a checkbox in its middle, and then move backward to the first enclosing element. If we identified the start of the record (or field) by a forward marker alone, we would risk capturing wrong input, because in this example there are more elements designating other items that have nothing to do with the record.
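The checkbox case above can be made concrete with a backward scan. The markup is invented for illustration: the record is only reliably identified by the checkbox in its middle, and the record start is recovered by searching backward from that marker, which a forward-only matcher cannot do.

```java
// Illustrative sketch of backward parsing: find the record start by
// scanning backward from a marker in the middle of the record.
public class BackwardScan {
    // Locates the <tr> that encloses the checkbox by searching backward
    // from the checkbox position.
    static int recordStart(String page) {
        int checkbox = page.indexOf("type=\"checkbox\"");
        if (checkbox < 0) {
            return -1;
        }
        return page.lastIndexOf("<tr>", checkbox);
    }
}
```

A forward-only rule anchored on `<tr>` would match the first row (an unrelated banner here) instead of the record row.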
Citation Parsing
Muse: For citation parsing extractors we have a visual generator, an automatic builder for citation extractors assisted by the programmer, who merely acknowledges the correctness of the extraction. It smartly matches dozens of pre-existing patterns to extract the citation subfields: CITATION-JOURNAL, CITATION-VOLUME-ISSUE, CITATION-JOURNAL-TITLE-ABBREVIATED, CITATION-VOLUME-URL (just a tiny part of the whole citation lot of fields). The tool applies the extractor patterns and provides results for all of them. The programmer just acknowledges the best results, and can use many varied inputs to ensure the correctness and reliability of the extraction.
The Citation Parsing Builder also assists in grouping the rules, so that the string is decomposed top-down: the best-matching extractors (where we know for sure there is a date or a journal issue) are applied first, the remaining string is thus refined, and the most difficult extraction takes place at the end so that it yields no false positives.
Velocity: Velocity does not have this notion. It is not possible to obtain citations through just start and end regular expressions for an entity, even writing them from scratch.
Obtaining these through an XSLT parser (in the case of an XML connector) written by hand from scratch is very hard, very time-consuming, and would be particular to a certain source. There is also no tool for this.
Polymorphic output
Muse: As stated above, Muse Connectors use logic to get the best out of sources when there is polymorphic output – output that changes, expectedly or unexpectedly, showing several formats.
For example, there is support for parsing content from result pages with different structures. The connector can be written to analyze the structure of the current page and to activate a dedicated parser for that page.

A particular case is single-record parsing, but there are other cases too – there is support in Muse Connectors Generator.
Velocity: No, or extremely hard to achieve, and only with XSLT. Even if the Java parser becomes available in the meantime, it is unlikely that a Java API will also be available – not to mention tools.
Non-linear parsing
Muse: Support for parsing record data from different sections of the page, even when the records are not identified by a clear block of text in the page (not following the raw HTML text flow).
For example, the site may return the results in a table: the first row contains the images for the first 3 records, the second row the descriptions for the first 3 records, and the third row other information for the first 3 records.

In such a case the data for the first record must be parsed from 3 blocks of text (the first TD element of each TR element of the table), the data for the second record from another 3 blocks of text (the second TD element of each TR element), and the data for the third record from yet another 3 blocks (the third TD element of each TR element).
Velocity: Not possible with regexp, as the backward direction is not supported. In XSLT this would be extremely difficult.
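The table layout described above amounts to a transposition: record i is reassembled from column i of every row. A minimal sketch (the row/column model is simplified; real code would first extract the TD texts per TR):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of non-linear parsing: records spread across
// table rows are rebuilt by taking the same column from every row.
public class NonLinearParse {
    static List<List<String>> records(List<List<String>> rows) {
        int columns = rows.get(0).size();
        List<List<String>> records = new ArrayList<>();
        for (int c = 0; c < columns; c++) {
            List<String> record = new ArrayList<>();
            for (List<String> row : rows) {
                record.add(row.get(c)); // the c-th TD of each TR
            }
            records.add(record);
        }
        return records;
    }
}
```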
Category parsing
Muse: Support for category parsing. Based on the section of the page, the results are parsed using different code. There may also be a native "more results" link for each category of results. The parsing for each "more results" from a certain category is done using the parser for that category.
Velocity: No.
Other Protocols
Muse: Parsing JSON streams and any other protocol for which Java APIs are available.
Velocity: No.
XML Queries
Muse: Working with Web Services. Sending complex XML queries defined easily through external files, parsing complex XML responses from any number of XML streams, and integrating the results in a Muse record.
Velocity: You need to apply a query parser over the parser initially generated by the form to cater for these, and you have to do this without any developer API, just in the raw language. Also, not everything can be obtained with this parser, as described in the Query section.
XML Extraction
Muse: A powerful XML API that can parse the XML content as it comes, without waiting to receive the entire page. While parsing content from the current stream it can open other XML streams and complete the current record with data parsed from those streams.
Velocity: No.
Complexity
Muse: In short, any possible parsing on any number of levels which has a logic can be done with Muse, because it ultimately uses Java code, and any complex parsing algorithm can be implemented in Java. And for 90% of the cases we have generators and APIs.
Velocity: No, or very hard, next to impossible, to achieve; the parsing could also end up unreliable.
Debugger/Inspectors
Muse: In Muse Connector Generator we have a step-by-step debugger. Visual inspection of the records extracted from the source is also possible through post-generation tools such as SP Testing.
Velocity: Just log traces and request/response logs.
Visual extractors
Muse: Besides the Citation builder we have an experimental visual record extractor – the programmer visually selects the delimiters and the rules are generated automatically.
Velocity: No.
Multilevel
Muse: The extracted data can be saved in multi-level fields. This is important when you need to retain certain bindings/groupings between the fields.
Velocity: No – only flat.
Extended Parser
Muse: We have support for Extended parsers, which extract all the fields from the detailed record page.
Velocity: Very difficult to write, no visual tool, and the resulting XML code is hard to understand.
Session Reusing
Muse: Straightforward support for saving and loading session parameters, which for example allows session reuse where applicable.
Velocity: You need to make custom settings per project and alter the main XML or other XMLs involved in the process – not an easy task, requiring very high expertise.
Proxy and URL Rewriting
Muse: Proxying (including proxy selection via interpreting the Proxy PAC) and URL Rewriting are done transparently; nothing has to be coded in the connector, no matter how many requests are performed.
The programmer only needs to make sure that the cookies, referrer and any other authorization elements are added to the administrative fields of the record.

The rest is just a matter of configuring the source package or the application to specify whether to run through a proxy (or the PAC, depending on the destination) and the MNM rewriting patterns.

Of course, if the source does not require the URL to be rewritten, the programmer does not configure any pattern in the source profile.
Velocity: Only a plain proxy can be involved, and if the project-level proxy is not configured, the programmer has to code the proxy into the source code itself for each parse instruction.
Error and Progress Report
Muse: There are many error codes and possibilities to report from the connector, as well as status updates for progress, all fully internationalized and localized.
Quality Assurance
Muse: Besides the rigorous procedures, file versioning and issue tracking the team implements, the SP Testing tool can run dozens of search test cases, also allowing an easy comparison between the native behaviour and the Smart Connector behaviour.
This ensures quality in every detail, whether you search a simple query term or four different multi-word terms with different operators. Multiple scenarios are possible.

The extracted fields can be inspected visually, whether there are two of them or hundreds.
Velocity: While Muse has a real tool for this, with interactive actions, Velocity has just a test configuration screen with about 15 input boxes for a query and number of results. This is more like our Source Checker tool or the test screen in the Muse Source Package upgrade (which is not part of the building phase anyway).
No support for visual record inspection.

Because a picture is worth a thousand words, see the images below showing screens from both Muse SP Testing and Velocity Testing.

The Source Package Testing tool supports performing individual tests only on selected cells, where a cell identifies the search query performed and the search attribute.

The test search results can be investigated in detail by being displayed in a browser, where they can be filtered in various ways (e.g. show the records that do/do not contain certain fields) or compared to the native raw view.

In the Statistics tab all the fields present in the records are identified and displayed, and the number of appearances of every field can be seen in a graphical representation.

The Velocity Administration Tool offers just a test configuration screen with about 15 input boxes for a query and number of results. There is no support for visual record inspection.

In practice there are front-end applications where the entire data set cannot be obtained at once from the underlying service (e.g. a search engine), or where it is impractical to wait for the entire data set – for example a big and fairly remote database. In these cases, if the underlying service supports pagination, we can use an OutlineView displaying a moving window of records and add external pagination controls. Or, depending on the application requirements, it may be that only the first 1000 records are displayed and the rest ignored.

Whichever option it is, we may want further functionality from the application, such as sorting. But to accomplish sorting correctly we need to do it via the underlying service, which has access to the whole data set. We could add an extra sorting panel beside the OutlineView and control the sorting options from there and trigger the sorting.

But we like the natural way OutlineView behaves when sorting local data via the header columns and their combinations, so we consider the end-user would have a better experience if the service sorting were triggered through the OutlineView table header controls. So we decided to invoke the service sorting through the OutlineView table header and to display the service's response to that sorting in the OutlineView.

For simplicity, assuming a page size of 10 in a Customer view with 57 records: if you sort ascending by "Last name", on the first page we want to see the first 10 records in last-name order from the entire set:

ExternalSortingLastNameAsc

while moving to the 6th page we expect to see the last records, including those with null values, without any further action on the table header:

ExternalSortingLastNameAscLastPage

Back on the first page, when we switch to descending sorting we don't want those 10 records from the first page merely reversed, because there are far more records in the database – we want to see the last 10 records (with non-null Last name) reversed, i.e. those starting with Chinese characters and Z:

ExternalSortingLastNameDesc

As OutlineView would sort just the local slice of records using its internal sorting algorithm, several delicate aspects were encountered and solved. Briefly, this is what we accomplished:

  1. Intercepting the column header sort actions.
  2. Getting all the sorting information (column, order) to create the service request.
  3. Recreating the tree and table with the data received. The data in the current page can change entirely under complete (external) sorting.
  4. Neutralizing local sorting on the above data, as the sorting algorithms/comparators can differ and we didn't want to modify the OutlineView code. We needed to make local sorting invariant (an identity function using an "equality" comparator)... but null values were still shown first in OutlineView – we found a solution for this as well.
  5. Persistence – what happens after we close and reload the OutlineView component, as the sorting properties are persistent.

To exemplify, we will consider an SQL-like syntax for passing the sorting options to the remote service; that is, via the service API we will send
ORDER BY COL1 ASC, COL2 DESC, COL3 ASC ...

For example, in the snapshot below we group three columns for sorting:

ExternalSorting3Columns

The column header sort actions (item #1) are intercepted with a MouseListener added to the Outline's table header:


recordsTable.getOutline().getTableHeader().addMouseListener(new MouseAdapter() {
    @Override
    public void mouseClicked(MouseEvent e) {
        if (e.getButton() == MouseEvent.BUTTON3 || e.getClickCount() != 1) {
            // Sorting is done with single clicks of the other buttons; we are not interested in these events.
            return;
        }

        int column = recordsTable.getOutline().columnAtPoint(e.getPoint());
        if (column < 0) {
            return;
        }

        // Although we could receive the event for column resizing, nothing will change because no column is found as sorted.
        TableColumnModel tcm = recordsTable.getOutline().getColumnModel();
        if (tcm instanceof ETableColumnModel) {
            ETableColumnModel etcm = (ETableColumnModel) tcm;
            TableColumn tc = tcm.getColumn(column);
            if (tc instanceof ETableColumn) {
                ETableColumn etc = (ETableColumn) tc;
                if (etc.getNestedComparator() != equalityComparator) {
                    etc.setNestedComparator(equalityComparator);
                }

                // In the future we may improve this and keep track of only the sorting columns, without looping through all of them each time.
                customizeSorting();

                // For now it is not worth keeping the selection, as we are multi-paged.
                recordsTable.getOutline().clearSelection();
            }
        }

        // Here we call the external service that does the sorting and recreate the nodes to provide the new slice.
        applyOptions();
    }
});

An important method in the code above, which gathers the sorting parameters (item #2), is customizeSorting(); it is shown below. Besides calling it from the MouseListener, we also call it when our TopComponent is opened (componentOpened()), together with a method that sets the custom nested equality comparator, after reading the persisted settings of the OutlineView; this ensures that the OutlineView remains sorted in the same configuration after reopening (item #5). Basically, we identify that a column has been clicked for sorting purposes, read its sorting state and rank, and add it to the previous sorting states. In this first version we actually loop through all the visible columns and do this for every column whose isSorted() returns true.


protected void customizeSorting() {
    // Identify the columns in the external service.
    String[] columns = getColumnOrderBy();
    // Default column used for sorting.
    int sortedColumnIndex = getDefaultOrderByColumn();
    // Default order type used for sorting.
    String sortedOrderType = getDefaultOrderType();
    // Member variable used as well by the applyOptions() method.
    orderBy = "";
    boolean sortedFound = false;
    try {
        int numCols = recordsTable.getOutline().getColumnModel().getColumnCount();
        ETableColumn[] orderByCols = new ETableColumn[numCols + 1];
        // Hidden columns are not here. Possibly in future versions of NB.
        for (int i = 0; i < numCols; i++) {
            ETableColumn etc = (ETableColumn) recordsTable.getOutline().getColumnModel().getColumn(i);
            if (etc.isSorted()) {
                if (etc.getSortRank() < orderByCols.length) {
                    // We remap the column to take into account the real ranking.
                    orderByCols[etc.getSortRank()] = etc;
                }
            }
        }

        for (int i = 1; i < orderByCols.length; i++) {
            if (orderByCols[i] != null) {
                // This means the column is sorted.
                ETableColumn etc = orderByCols[i];
                sortedColumnIndex = etc.getModelIndex();
                if (etc.isAscending()) {
                    // sortedOrderType = "ASC NULLS FIRST";
                    sortedOrderType = "ASC NULLS LAST";
                } else {
                    sortedOrderType = "DESC NULLS LAST";
                }
                if (sortedFound) {
                    orderBy += ", ";
                } else {
                    sortedFound = true;
                }
                if (sortedColumnIndex < columns.length) {
                    // As multiple columns in the real model can be bound to the same table column, we need to add the sortedOrderType to each.
                    // For the usual case, a line like the one commented out below suffices:
                    // orderBy += columns[sortedColumnIndex] + " " + sortedOrderType;
                    orderBy += DatabaseLicenseOperations.createCorrectOrderBy(columns[sortedColumnIndex], sortedOrderType);
                }
            }
        }

        if (!sortedFound) {
            // The default column (which usually is the node column) is used.
            if (sortedColumnIndex < columns.length) {
                // As multiple columns in the real model can be bound to the same table column, we need to add the sortedOrderType to each.
                // For the usual case, a line like the one commented out below suffices:
                // orderBy = columns[sortedColumnIndex] + " " + sortedOrderType;
                orderBy = DatabaseLicenseOperations.createCorrectOrderBy(columns[sortedColumnIndex], sortedOrderType);
            }
        }
    } catch (LicenseException ex) {
        ex.printStackTrace();
    }
}

The columns array keeps a mapping between the table column index and the column(s) in the service/database corresponding to it. Based on this the ORDER BY value is created using the helper DatabaseLicenseOperations.createCorrectOrderBy(...) method to eventually split the multi-field columns and add the orderType to each database column. For example, in a view for a product version we could have a single logical column named Version made up of "major, minor" database level fields. We need to transform it into "major ASC NULLS LAST, minor ASC NULLS LAST".
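
For illustration, here is a minimal, hypothetical sketch of what such a splitting helper could look like; the actual DatabaseLicenseOperations.createCorrectOrderBy(...) belongs to our application and may differ in signature and details:

```java
import java.util.StringJoiner;

public class OrderByHelper {

    /**
     * Splits a logical column that maps to several comma-separated database
     * fields (e.g. "major, minor") and appends the order type to each field,
     * producing e.g. "major ASC NULLS LAST, minor ASC NULLS LAST".
     */
    public static String createCorrectOrderBy(String dbColumns, String orderType) {
        StringJoiner joiner = new StringJoiner(", ");
        for (String field : dbColumns.split(",")) {
            joiner.add(field.trim() + " " + orderType);
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        // Multi-field logical column from the Version example in the text.
        System.out.println(createCorrectOrderBy("major, minor", "ASC NULLS LAST"));
        // prints: major ASC NULLS LAST, minor ASC NULLS LAST
    }
}
```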

The internal table sorting effectively and visibly takes place after we return from the MouseListener method. Because we recreate the nodes and properties from the new slice of records, in the externally sorted order, via the call to applyOptions() in the mouse listener, these new values are what OutlineView sorts and displays. This resolves item #3. The only drawback (which we can afford in terms of execution time) is that sorting also happens in the OutlineView, so the data is sorted twice; the internal sort, however, operates only on the current slice of records, not on the entire data set. Because the internal sorting order can differ from the external sorting order of the service/database, we set the custom nested comparator defined below, resolving item #4:


/**
 * Equality comparator so that the sorting order is kept the same
 * as the one from the service/database.
 */
protected static class EqualityComparator<T> implements Comparator<T> {

    @Override
    public int compare(T o1, T o2) {
        // Always o1 = o2 so that the order from the service/DB is used.
        return 0;
    }
}

/**
 * Used for referring to the same nested comparator for all the columns.
 */
protected static EqualityComparator<Object> equalityComparator = new EqualityComparator<Object>();
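
Why an always-equal comparator preserves the service order: Java's collection sorts (Collections.sort, List.sort) are guaranteed stable, so elements that compare as equal keep their incoming order. The following self-contained sketch (plain Java, outside any NetBeans code) illustrates the effect:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class EqualityComparatorDemo {

    /** Comparator that treats all elements as equal. */
    static class EqualityComparator<T> implements Comparator<T> {
        @Override
        public int compare(T o1, T o2) {
            return 0; // always "equal": a stable sort keeps the incoming order
        }
    }

    public static void main(String[] args) {
        // Order as delivered by the service/database (already sorted externally).
        List<String> rows = Arrays.asList("Zhang", "Zelea", "Vasile", "Popescu");
        rows.sort(new EqualityComparator<String>());
        // The stable sort left the service order untouched.
        System.out.println(rows); // prints: [Zhang, Zelea, Vasile, Popescu]
    }
}
```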

We also set Update Selection on Sort to false, as there could now be rows from other pages present, and selection does not make much sense across pages.

recordsTable.getOutline().setUpdateSelectionOnSort(false);

There was still one more thing to resolve, namely the null values. When a column value is null, the internal sorting returns from its comparator method before calling the nested comparator. Hence nulls always end up in the first positions when sorting ascending and at the end when sorting descending, whereas we wanted to list null values at the end for either sorting order, controlling this from the external service/database sorting.

If we use the empty string instead of null, the nested comparator is called, and through our equality comparator we keep the order from the database. Hence we converted the null values into empty strings in the getValue() method of the PropertySupport.ReadOnly implementations that our property sheet set is made of.
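
The conversion itself is trivial; the sketch below is a standalone distillation (class and method names are hypothetical) of what our PropertySupport.ReadOnly.getValue() implementation returns for String-valued properties:

```java
public class NullSafeValue {

    /**
     * Null becomes "", so OutlineView's internal sort no longer
     * short-circuits on null and our nested equality comparator gets to
     * run, keeping the service/database order.
     */
    public static String getValue(String rawValue) {
        return rawValue == null ? "" : rawValue;
    }

    public static void main(String[] args) {
        System.out.println("[" + getValue(null) + "]");      // prints: []
        System.out.println("[" + getValue("Popescu") + "]"); // prints: [Popescu]
    }
}
```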

This setup can also be used without pagination, or in other data-availability scenarios where externalized sorting is a requirement in itself, for example for i18n aspects such as collation or transliteration. There are also cases where sorting, such as by ranking, is done via internal attributes of the underlying service which are not publicly exposed; to obtain the correct sorting order, the service's sorting operation must be used.

The usage of electronic services such as electronic banking, electronic commerce, or virtual mail is becoming ever more commonplace. Therefore there is an increasing need for digital certificates to establish authenticity, for digital signatures, and for encryption of personal data. This requires the ability to handle cryptographic material such as private/public key pairs, secret keys, and digital certificates; in other words, the ability to create key pairs and store them in different keystores, to export only the certificate into another keystore, to use a private key to digitally sign a document, and many others. These can be achieved easily using CERTivity thanks to its intuitive GUI and structure.

The following scenario can give a hint of how easy it is to work with key pairs and certificates in CERTivity.

The user wants to generate a self-signed key pair, store it in a keystore, and then copy only the certificate from the self-signed key pair into a different keystore (for example the cacerts keystore, or another truststore such as the Windows Root CA KeyStore).

Such a scenario can be found frequently in real life. For example, suppose we have a Java server and need to connect to it from a Windows client (which could simply be the browser, or a custom Windows client) over SSL. With connections over the SSL layer, authentication is performed using a private key and a public key: usually the private key resides on the server side, while the public key is found on the client side. That is why it is important, after creating a key pair, to be able to separate the private keys and certificates easily (as the certificates contain the public key).

The above-mentioned scenario can be performed with other existing tools as well, but the steps required would be: create the key pair and store it in the keystore, export the certificate to a file, and then import it again from that file into the truststore.

In CERTivity, this can be done in a few steps without any auxiliary files or export and import operations, using just clipboard operations. We will assume that the keystore into which the key pair will be generated is open and is called “my-keypairs.jks”, and that we want to copy the certificate into the Windows Root CA KeyStore.

The steps are the following:

  1. Create a new self-signed key pair. With the “my-keypairs.jks” keystore opened and focused, use the menu KeyStore > Generate Key Pair, or the Generate Key Pair toolbar button, to open the dialog for creating new self-signed key pairs.

    Step1 Generate Key Pair

    The following dialog appears, allowing the user to provide the needed information for generating the keys and the certificate.

    Step1 Generate Key Pair Dialog

  2. Expand the newly created key pair node in the keystore (by clicking on the “+” sign in front of the key pair entry), and also expand the Certificates Chain node. The newly generated certificate will be visible.

    Step2 Expand Key Pair Node

  3. Select the certificate node and copy it (by right-clicking on it and selecting the Copy menu, or by using the CTRL + C shortcut).

    Step3 Select and Copy Certificate

  4. Open the Windows Root CA KeyStore. This can be done very easily in CERTivity, as it has a dedicated menu for that: File > Open > Open Windows Root CA KeyStore.
  5. With the Windows Root CA KeyStore opened and focused, paste the copied certificate node (by using CTRL + V or the Edit > Paste menu). When inserting a certificate into the Windows Root CA KeyStore, the operating system displays a security warning informing that a certificate will be installed and asking for the user's permission:

    Step5 OS Security Warning

    After clicking “Yes”, the certificate will be imported into the Windows Root CA KeyStore, as can be seen in the screenshot below:

    Step5 Certificate Pasted in Windows Root CA KeyStore

As can be seen, no “export to file” and “import from file” operations were needed to accomplish the above scenario, which eases the user's work considerably. In the example above we used the Windows Root CA KeyStore as the truststore, but the steps are the same for any other keystore, with the exception of the security warning issued by the Windows operating system, which only appears on Windows systems when inserting a certificate into the Windows Root CA truststore.

This was only one simple example of how things can be done more easily using CERTivity, thanks to its user-friendly GUI, the way it is organized, and the features it provides; there are many more.

There are often situations in which we reach a website over a secure connection and the browser informs us that the website's security certificate is not trusted, using a warning message similar to the one below (here shown in the Google Chrome browser):

Certificate not Trusted

This happens mostly when accessing websites of companies that use internal CA certificates which are self-signed or are not signed by a known and recognized certificate signing authority. To be able to view these kinds of websites, the certificate has to be trusted.

When clicking the “Help me understand” link, we see some additional information about the problem; the last paragraph explains briefly, without any details, what should be done to avoid the security warning and access the website safely.

“If, however, you work in an organization that generates its own certificates, and you are trying to connect to an internal website of that organization using such a certificate, you may be able to solve this problem securely. You can import your organization's root certificate as a "root certificate", and then certificates issued or verified by your organization will be trusted and you will not see this error next time you try to connect to an internal website. Contact your organization's help staff for assistance in adding a new root certificate to your computer.“

Although the information is correct, it does not specify how one can import the organization's root certificate as a “root certificate”. Moreover, some additional tools might be required to perform the necessary operations (such as a tool for retrieving certificates from an SSL server, and a tool for inserting the obtained root certificate into the Windows Root CA KeyStore). Even with Internet Explorer you have to perform extra operations: export the CA root certificate to a file, then import it by opening that file from Windows Explorer and selecting the import location from among many. It is easy to get this wrong. This is where CERTivity comes in handy: with CERTivity, adding a new root certificate to your Windows OS is easier and faster.

The main idea is to obtain the organization's root certificate and insert it into the Windows Root CA KeyStore. To do that, one uses the built-in SSL Certificates Retriever to obtain the root certificate and imports it into the Windows Root CA KeyStore, which can also be accessed easily through CERTivity. In more detail, the simple steps to perform in CERTivity are the following:

  1. Open the Windows Root CA KeyStore (if not already opened). This can be done very easily in CERTivity, as it has a dedicated menu for that: File > Open > Open Windows Root CA KeyStore.
  2. While having the Windows Root CA KeyStore opened and focused, open the SSL Certificates Retriever. This can be done using the menu KeyStore > SSL Certificates Retriever (as seen in the screenshot below) or by using the SSL Certificates Retriever button from the toolbar.

    Open SSL Certificates Retriever Menu

    The SSL Certificates Retriever dialog will open, allowing the user to retrieve the certificates from a server:

    SSL Certificates Retriever Dialog

    This dialog allows retrieving certificates either by inserting an HTTPS URL or by entering the host and port of the server from which the certificates should be retrieved.
    When inserting an HTTPS URL, the host and port are automatically extracted and the “Host name” and “Port” fields are filled in. The default port used for HTTPS is 443, but a custom port can also be specified by putting it in the URL, according to the URL specification.
    If the user wants to use a certain host name and port number and does not have an HTTPS URL, he can select the second radio button below “URL (HTTPS)”, which enables the “Host Name” and “Port” fields, and insert or modify the host name and port number as desired.

    After providing the required information, press the “Retrieve certificates” button. In this example the URL https://jira.edulib.ro was used for retrieving the certificates:

    SSL Certificates Retriever Certificate Retrieved

    In this case, the response from the server is actually a chain of certificates, which starts with the organization's root certificate, “CA Cert Signing Authority”, which we need to import into the Windows Root CA KeyStore. Any of the other certificates from the chain can be imported as well, if needed.

  3. Select the certificate to be imported into the Windows Root CA KeyStore and press “Import to KeyStore”. The user will be prompted to enter an alias for the certificate; the name of the certificate is used as a suggestion.

    SSL Certificate Retriever Import Certificate Under a Specified Alias

    When importing a certificate into the Windows Root CA KeyStore, there is a security warning displayed by the operating system informing that a certificate will be installed, and asking for the user's permission:

    SSL Certificate Retriever Security Warning shown when Importing Certificate

    After clicking “Yes”, the certificate will be imported into the Windows Root CA KeyStore. The SSL Certificates Retriever dialog can now be closed by pressing “Close”. The newly imported certificate will be visible in the Windows Root CA KeyStore, as can be seen in the screenshot below:

    SSL Certificates Retriever with Imported Windows Root CA KeyStore

  4. After performing these simple steps to import the certificate, restart the browser for the changes to take effect. The security warning will no longer be displayed, allowing the desired website to be viewed.

    Certificate Trusted

    The organization's root certificate can also be imported into the Windows Root CA KeyStore in a similar way from a different keystore, if it already exists in one, using copy and paste operations. It can also be copied from a key pair, in a way similar to the one described in the post Simplifying key pair and certificate management operations with CERTivity.