cwaters

Members · Content Count: 16 · Rank: Member


  1. Hello folks, I'd like to get some additional detail on what it takes to get in-place upgrades working for HA instances of Passwordstate. We've only been able to do a manual upgrade at this point and I'd like to begin troubleshooting that. A few things to know: This is in Azure (seems fine). We use Azure SQL for the DB (also fine). We use the Azure LB technologies (Traffic Manager, App Gateway w/WAF). The OS on these servers is hardened (likely a contributing factor). I've gone over the upgrade guide but we've never been able to get ev
  2. I agree, the extension swapping isn't ideal, but it would provide a structured and repeatable process for a user to follow if we end up in that situation, one that would apply equally to all users, regardless of OS. The alternative being they lose the auto-fill functionality until the platform catches up. The group policy only works on Windows, which still leaves out the Mac users (like myself), which we estimate to be about 40% of our user population. Thanks again.
  3. I think you're focused on the word beta and not the intent. I'm not suggesting you release a beta version in the store. The beta moniker is just a naming convention that demonstrates that these developers have more than one plugin in the store for the same thing. I can choose to run the beta or the stable. I have the option to control the version in that sense. In the same way if there were 2 different Passwordstate extensions in the store (you could name them anything you wanted), I would have the option to use the current Passwordstate plugin, or the previous. It's just a convention.
  4. Below is a random example from the Chrome store. Basically, they are 2 separate extensions. You can find a lot of examples by searching for "beta" in the store, and then redoing the search using the base extension name of one you find. In many cases, you'll see examples like below. Full disclosure that I haven't found anything that says you can or cannot do this in the store, but it seems like quite a few devs do this. I assume when they are done with beta, they move the code to the non-beta plugin reference. I suggest something similar could be done when you release the latest version
  5. I put forth a feature request which was already archived with a similar response. I would say that my suggestion to have a "Latest" and "Previous" version of the plugin available in the stores could have used more consideration, Clickstudios. I see in the extension stores that stable and beta versions of plugins are available for many. I don't see a fundamental difference here for my suggestion, but Clickstudios said they don't believe the stores allow this. Here's a link to the other thread, which I can't reply to: https://www.clickstudios.com.au/community/index.php?/topic/2849-browse
  6. Hi everyone, I'd like to recommend that previous versions of the browser extension be available in the extension stores. As a company with what will be a fairly large user base, we need a stable platform where dev/test/prod pipelines and proper change management can take some time. That time may exceed the duration from when a new version of the extension is released to when we can update the backend platform. The impact of having the extension automatically update and become non-functional because the main platform has not been updated is problematic
  7. Please correct me if I'm wrong but I believe what @AndersB is alluding to is if you want to manage access by using/importing existing security groups from your AD, you can't do that today with AAD even if you are syncing from AD to AAD. AAD is only for the authentication part. If that's not true, I'd love to know what I'm missing.
  8. Glad to see this discussion moving! Just to share the info (though we seem to be past these specifics): To answer the earlier question about the HA capabilities of App Gateways in Azure, yes, they are somewhat limited as far as the health checks are concerned. The best way to describe them is "If I see this thing, you're healthy and I can send you traffic." There's no concept of "You're sick, I shouldn't send you traffic." On the question about a web node not being able to get to the DB in a region: the DBs are replicated and available across multiple re
  9. We have a high availability setup in Azure for Passwordstate that consists of a Traffic Manager and 2 sets of App Gateway load-balanced clusters in different regions. We have encountered an issue with what happens when the DB is unreachable from a single region. Currently in Azure, the load-balancing health checks available only look for "good" HTTP status codes and strings to determine health for the backend pools. In the case of Passwordstate, if one of the redundant servers/clusters behind the load balancing loses the ability to connect to the DB, the webserver still produces a good co
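To make the probe gap above concrete, here's a minimal sketch of what a DB-aware health endpoint could return, assuming the application exposed one (the function name and shape are illustrative, not part of Passwordstate):

```python
# Hypothetical sketch: map backend DB reachability to an HTTP status the
# App Gateway probe can act on, instead of always returning 200 while
# the page content is broken.
def health_status(db_reachable: bool) -> tuple[int, str]:
    """Return (status_code, body) for a load-balancer health probe."""
    if db_reachable:
        return 200, "OK"
    # A non-2xx code takes the node out of the backend pool, since the
    # probe treats anything outside the healthy-status range as down.
    return 503, "Service Unavailable: database unreachable"
```

The point is simply that the health endpoint, not the probe, has to know about the DB dependency; the probe can then stay a dumb status-code check.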
  10. The format is user@domain. In this case, the domain is not the same as the email domain. I believe that most of what MS does in this area is here: https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-federation-saml-idp I found this note on the page: The “UserPrinciplName” value must match the value that you will send for “IDPEmail” in your SAML 2.0 claim and the “ImmutableID” value must match the value sent in your “NameID” assertion. I'll ask my admins how we have this implemented specifically as it may help
  11. In our case, because we have a hybrid traditional on-premise AD with integration to Azure AD (not an uncommon situation, I would wager), UPN still seems like the best fit. I would suggest that the change would be to allow the user to simply change the default for that specific attribute (Name Identifier). That way, if a user doesn't actually have an email address, there is still a value to match against. I'm not sure if that makes sense or not, or perhaps I misunderstand how the SAML is supposed to work. We have a workaround by adding a bogus email address on the AD side, but it
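To illustrate the configurable-Name-Identifier idea, a sketch of the fallback I have in mind (attribute names follow AD conventions, but the function itself is hypothetical, not anything Passwordstate exposes today):

```python
# Hypothetical sketch: pick the SAML NameID value from a configurable
# preferred attribute (e.g. mail), falling back to userPrincipalName so
# that users without an email address still have a value to match on.
def name_identifier(user: dict, preferred: str = "mail") -> str:
    value = user.get(preferred)
    # Empty or missing preferred attribute -> fall back to UPN.
    return value if value else user["userPrincipalName"]
```

With something like this configurable on the Passwordstate side, the bogus-email workaround on the AD side wouldn't be needed.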
  12. Thanks for the response. I was able to get this to work, and it appears it actually had to do with the IIS setting for Anonymous access needing to be enabled. I should have kept better notes on my testing, but I believe that was the change that enabled this to work as expected for me. I'll try to test again to confirm that. With SAML 2, is it not the case that you can choose which attributes can be used for the Name Identifier? Maybe a feature request would be to allow this to be configurable on the Passwordstate side (it is on the Azure SSO side). A current use case is for adm
  13. Other than needing to log in twice, once for AD and once for RADIUS, you "can" use Azure MFA with an NPS server with the Azure MFA extension installed. You will need to be using the "push" notifications for the Authenticator app, but this does work. I tested it today, as a matter of fact. https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-nps-extension
  14. Hi everyone, We're trying to take advantage of using SAML with Azure SSO (and Azure MFA) but are encountering some issues with the user mapping. I followed the guide from the Security Admin Manual. Our userprincipalnames don't match our email and that may be part of the issue. SAML is "working" based on the error we get but I'm not sure where to go from here (Error 1). I've also tried re-mapping the SAML token attributes on the Azure SSO side so that the emailaddress was the user.userprincipalname value and various combinations but anything other than the User Identifier as use
  15. I'm generalizing, of course, about the kinds of accounts this could be done on, but I was thinking it could apply to just about any account type where there's already a password reset script. The problem I was specifically trying to solve was for some Oracle DB accounts; I was provided a script block to test by modifying a reset script. I would say that the bulk of the work is already done, in the sense that the reset scripts are already talking to the systems/DBs. The scripts would just need to issue the "lock" command vs. the reset command. As for the specific execution, I was thinking
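To sketch what "lock vs. reset" would look like for the Oracle case: the lock operation is standard Oracle DDL (`ALTER USER ... ACCOUNT LOCK`), so a reset script would only need to emit a different statement. The helper below is purely illustrative, not part of any Passwordstate script:

```python
# Hypothetical sketch: build the Oracle statement a modified reset
# script would issue. ALTER USER ... ACCOUNT LOCK/UNLOCK is standard
# Oracle syntax; the reset branch shows the existing behaviour for
# contrast. (A real script would bind/escape the identifier properly.)
def oracle_account_sql(username: str, action: str) -> str:
    statements = {
        "lock": f'ALTER USER "{username}" ACCOUNT LOCK',
        "unlock": f'ALTER USER "{username}" ACCOUNT UNLOCK',
        # Existing reset scripts issue something along these lines:
        "reset": f'ALTER USER "{username}" IDENTIFIED BY <new_password>',
    }
    return statements[action]
```

Since the reset scripts already hold a working connection to the DB, swapping the statement is the only substantive change.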