Security

Melange can only have moderate security...

Melange will use Google account logins extensively. Because of various sophisticated (and some less sophisticated) forms of malware, no Google account/password combination can be treated as highly secure, particularly given the large number of users. For example, "an admin in a mentoring organisation may work on a compromised machine and accidentally reveal his or her Google account password". A compromised account could be used to maliciously change the delivery addresses for the mentor organisation, to change 'payment not required' to 'not received yet' (if Melange tracks that), and for other more devious acts. The design must recognise that the system as a whole is medium to low in security, and that the parts involving decisions on real money must be kept cleanly outside Melange itself.

Exactly where that clean dividing line lies needs very careful thought and input from developers with sound security experience. The security dividing line creates a need, at times, to generate reports (e.g. lists of people who are apparently due to be paid) that can be exported to a more secure system from which payments can actually be made.
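
As an illustration of the kind of export that dividing line implies, the sketch below writes an "apparently to be paid" report to a CSV file that could then be moved to a more secure system. The function name, column layout and tuple format are all hypothetical, not existing Melange code.

    # Illustrative export of an "apparently to be paid" report as CSV, so the
    # actual payment decisions can happen on a more secure system.
    import csv

    def export_payees(payees, path):
        """payees: an iterable of (name, email, amount_usd) tuples (hypothetical)."""
        with open(path, 'w', newline='') as out:
            writer = csv.writer(out)
            writer.writerow(['name', 'email', 'amount_usd'])
            writer.writerows(payees)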

There are also important security issues with regard to HTML in responses. A general filter will be in place to prevent, for example, a student's or an organisation's application from containing JavaScript.
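
A minimal sketch of the sort of whitelist filter intended here, using the third-party bleach library; the allowed tag and attribute lists are illustrative assumptions, not Melange's actual policy.

    # Whitelist-based HTML filter: anything not explicitly allowed is stripped,
    # which removes <script> tags, event handlers and other active content.
    import bleach

    ALLOWED_TAGS = ['a', 'b', 'em', 'i', 'li', 'ol', 'p', 'strong', 'ul']
    ALLOWED_ATTRIBUTES = {'a': ['href', 'title']}

    def clean_application_html(raw_html):
        """Return raw_html with all non-whitelisted markup removed."""
        return bleach.clean(raw_html,
                            tags=ALLOWED_TAGS,
                            attributes=ALLOWED_ATTRIBUTES,
                            strip=True)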

Security Thinking

Improving our security awareness will also help in other ways. For example:

  • Analysis of logs for ‘suspicious activity’ will help us be more aware of how Melange is actually being used, but logs are currently hard to obtain and analyse.
  • Becoming more organised about manual testing to test for security issues will help us with other testing too.
  • Supporting GSoC orgs in their security measures (allowing large orgs to pass PGP keys to verify mentor sign-up) will lead to a better Melange in other ways too. This was the essential content of Issue 385 (on Google Code), a request for an additional text field; a minimal verification sketch follows this list.
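
A minimal sketch, assuming the third-party python-gnupg package and an org-supplied public key, of how a PGP-signed mentor sign-up could be verified; every name here is illustrative rather than taken from Melange.

    # Verify that a mentor sign-up message was signed with one of the keys the
    # organisation supplied out of band (e.g. via the extra text field asked
    # for in Issue 385).
    import gnupg

    def signup_signed_by_org(signed_message, org_public_key):
        gpg = gnupg.GPG()
        imported = gpg.import_keys(org_public_key)
        if not imported.fingerprints:
            return False          # the supplied key could not be parsed
        verified = gpg.verify(signed_message)
        # Accept only a valid signature made by one of the org's own keys.
        return verified.valid and verified.fingerprint in imported.fingerprints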

The issue of security isn't entirely theoretical. In 2008 a large number of GSoC organizations were approached by several people whom they 'did not know' asking to be mentors, one of whom may have been trying to impersonate Google's David Anderson, hence the !notme bot command on #gsoc.

Edit Hint: Move the two security sections to a new page and keep this page just for ACLs?

Access Model

Different users/roles have different permissions, and those permissions even vary over time.

  1. It is probably not possible to fully designate all the permissions and roles up front. For example, the role of a mentor is not the same as the role of someone who is attending the mentor summit. In an umbrella organisation with, say, more than 20 active projects, a super admin may have oversight over all proposals and projects for that org, whereas a sub admin might have oversight over more restricted areas.
  2. A clean design factoring separates access rights from page display code. A proposal is to have a central access control mechanism that, given a field identifier and a request (probably an HTTP request), can tell for this request whether:
       • The field can be viewed at all.
       • The field is read only.
       • The field can be modified.

It's expected that pages will share some centrally provided formatting routines that automatically choose whether to show a field as an editable or non-editable box, based on the permissions.
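
A minimal sketch of what such a central access-control mechanism and the shared formatting routine might look like; the role names, policy table and rendering details are invented for illustration and are not existing Melange code.

    # Central access check: one place decides, per request, whether a field is
    # hidden, read-only or editable; page code never makes that decision itself.
    HIDDEN, READ_ONLY, EDITABLE = 'hidden', 'read_only', 'editable'

    # Hypothetical policy table keyed by (role, field identifier).
    FIELD_POLICY = {
        ('host', 'org.payment_status'): EDITABLE,
        ('org_admin', 'org.payment_status'): READ_ONLY,
        ('mentor', 'org.payment_status'): HIDDEN,
    }

    def field_access(field_id, request):
        """Single point of truth for per-field permissions."""
        role = getattr(request, 'role', 'public')   # however the role is resolved
        return FIELD_POLICY.get((role, field_id), HIDDEN)

    def render_field(field_id, value, request):
        """Centrally provided formatting: editable box, plain text, or nothing.
        (HTML escaping omitted for brevity.)"""
        access = field_access(field_id, request)
        if access == EDITABLE:
            return '<input name="%s" value="%s">' % (field_id, value)
        if access == READ_ONLY:
            return '<span>%s</span>' % value
        return ''   # hidden fields are simply not rendered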

A highly privileged user may drop down to a lower role to see a page as a less privileged user would see it. Many pages may be read-only by default, with the option to open an editable view available to sufficiently authorised users.
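
A small sketch of how the 'drop down to a lower role' idea could work, assuming a simple ordering of roles; the role names and the request mechanism are assumptions for illustration.

    # "View as" support: a privileged user may ask to be treated as a lower
    # role for the current request, but can never escalate above their own.
    ROLE_RANK = {'public': 0, 'student': 1, 'mentor': 2, 'org_admin': 3, 'host': 4}

    def effective_role(actual_role, requested_role):
        """Return requested_role only if it is no higher than actual_role."""
        if (requested_role in ROLE_RANK
                and ROLE_RANK[requested_role] <= ROLE_RANK.get(actual_role, 0)):
            return requested_role
        return actual_role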

Terms of Use

Terms-of-Use figure in the access control

The access model is closely tied to signing up to the terms of use; the result of the terms-of-use question will figure in the access control logic.
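
A minimal sketch of how the terms-of-use answer might gate access decisions; the profile attributes and role names are hypothetical.

    # Until the terms of use have been agreed to, treat the user as an
    # anonymous visitor regardless of any other roles they hold.
    def allowed_roles(user_profile):
        if not getattr(user_profile, 'agreed_to_terms_of_use', False):
            return ['public']
        return getattr(user_profile, 'roles', ['public'])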